In this paper, we present a 3D shape modeling system based on the Tsai-Shah shape from shading (SFS) algorithm. The SFS algorithm provides partial 3D shapes, in the form of depth maps, of the object to be reconstructed. The reconstruction itself is performed by our previously developed Projected Polygon Representation Neural Network (PPRNN), which successively refines the polygon vertex parameters of an initial 3D shape based on 2D images taken from multiple views. The reconstruction is finalized by mapping the texture of the object image onto the initial 3D shape. It is known from static stereo analysis that, even when multiple-view images are used, 3D structure cannot be obtained without base-distance information, i.e., the baseline separation between the different camera positions, unless something else is known about the scene. Here we propose the use of shading features to extract 3D depth maps with a fast SFS algorithm, instead of reconstructing the object from the bare 2D images alone. A preliminary result of reconstructing a human (mannequin) head and face is presented. Our experiments show that using only the 2D images yields a poor reconstruction, whereas using the depth maps produces a smoother and more realistic 3D object.
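For concreteness, the sketch below illustrates the kind of Newton-style update at the core of the Tsai-Shah SFS formulation: the Lambertian reflectance equation is linearised around the current depth estimate and each pixel's depth is refined iteratively from a single shaded image. This is a minimal illustrative sketch only; the function name, the slant/tilt light parameterisation, the flat-surface initialisation, and the wrap-around border handling are our assumptions, not the exact implementation used in the system described here.

```python
import numpy as np

def tsai_shah_sfs(image, light_slant_tilt=(45.0, 45.0), n_iter=200):
    """Illustrative Tsai-Shah style shape-from-shading iteration.

    `image` is a grayscale image normalised to [0, 1]; the return value is
    a per-pixel depth map (up to scale). Assumes a Lambertian surface and a
    known distant light source given by slant/tilt angles in degrees.
    """
    I = image.astype(np.float64)
    slant, tilt = np.radians(light_slant_tilt)
    # Light-source gradient components (ps, qs) derived from slant/tilt.
    ps = np.cos(tilt) * np.tan(slant)
    qs = np.sin(tilt) * np.tan(slant)

    Z = np.zeros_like(I)  # depth estimate, initialised to a flat surface
    for _ in range(n_iter):
        # Discrete surface gradients p = Z(x,y)-Z(x-1,y), q = Z(x,y)-Z(x,y-1)
        # (np.roll gives simple wrap-around handling at the image border).
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        pq = 1.0 + p * p + q * q
        pqs = 1.0 + ps * ps + qs * qs
        # Lambertian reflectance map R(p, q), clipped to non-negative values.
        R = np.maximum(0.0, (1.0 + p * ps + q * qs) / (np.sqrt(pq) * np.sqrt(pqs)))
        # Brightness error f = I - R and its derivative with respect to Z.
        f = I - R
        df_dZ = ((p + q) * (1.0 + p * ps + q * qs) / (np.sqrt(pq ** 3) * np.sqrt(pqs))
                 - (ps + qs) / (np.sqrt(pq) * np.sqrt(pqs)))
        # Newton update per pixel, guarded against a vanishing derivative.
        Z = Z - f / (df_dZ + 1e-8)
    return Z
```

In this sketch each depth map obtained from one view would then serve as the partial 3D shape handed to the multi-view refinement stage, rather than the raw 2D intensity image.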