Today I tried my hand at photogrammetry (again). This time I was determined to succeed, and succeed I did, sorta…
A few moons ago, I tried to 3D scan one of my Bakugan, namely Pyrus Trunkanious, with pretty bad results… here, have a look.
The problem was that I only did a single overhead pass, which resulted in the legs and the whole bottom area not rendering properly, as well as a very poor mesh overall.
Now, after reading through a couple of tutorials, I realized it is possible to do multiple passes by SHIFTING the object around, as long as there is significant overlap (80-90%) between the images!! What a revelation!
Armed with this new knowledge, I decided to perform a 3D scan of Darkus Pharol. I did 3 passes: one with it standing normally, and 2 with it lying on its sides. I made sure to cover at least 360 degrees per pass to ensure enough overlap between the images.
The first step was extracting images from the 4K video I took. The software did this automatically, at a rate of 1 fps, so each 78-second pass yielded 78 photos. With 3 passes, that came to around 240 images, or camera angles.
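The extraction was handled by the software, but if you wanted to do it yourself, something like this little OpenCV sketch would get you the same 1 fps frames (the file and folder names are just placeholders):

```python
# Rough sketch of 1 fps frame extraction from a video.
# My photogrammetry software did this automatically; this just
# illustrates the idea. "pass1.mp4" and the output folder are
# placeholder names, not my actual files.
import os
import cv2

video_path = "pass1.mp4"
out_dir = "frames_pass1"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS)       # e.g. ~30 for typical footage
step = max(1, int(round(fps)))        # keep 1 frame per second

frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.png"), frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} frames")        # a 78 s clip gives ~78 frames
```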
The next step was adding masks to the images, to ensure only Pharol was being processed, and not the stand or the background. This was perhaps the most tedious part, as every image had to be touched up individually. The automatic algorithm was very helpful, but it still needed a lot of manual intervention. This took around 1 hour to complete.
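The masking tool in my software is its own thing, but conceptually it is just foreground/background segmentation. As a rough illustration (not what my software actually runs), here is how you could pull a similar mask with OpenCV's GrabCut; the file name and bounding rectangle are made-up values you would tweak per image:

```python
# Minimal foreground-masking sketch using OpenCV's GrabCut.
# This is NOT my scanning software's algorithm; it just shows the
# general idea of separating the subject from the stand/background.
import cv2
import numpy as np

img = cv2.imread("frame_0000.png")        # placeholder file name
mask = np.zeros(img.shape[:2], np.uint8)

# A rough bounding box around the subject acts as the manual hint
# that the automatic algorithm then refines (values are made up).
rect = (800, 200, 2200, 1800)             # x, y, width, height

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5,
            cv2.GC_INIT_WITH_RECT)

# Pixels marked definite or probable foreground become the mask.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
              255, 0).astype("uint8")
cv2.imwrite("frame_0000_mask.png", fg)
```

In practice the automatic pass gets you most of the way there, and the manual hour is spent nudging the boundary wherever the mask bleeds into the stand.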
Next was just letting the software run its course. It took a total of around 3 hours, as I had selected the high-detail settings.
The first run was a total mess. Nothing recognizable came out of it, which was pretty disappointing. I tried regenerating the mesh from the previously computed points, but that didn't change much.
Here I concluded a few things:
- The angle of incline while taking the images was too high; I had to lower it
- The camera distance should be kept as consistent as possible between takes
- Stop the lens down to f/11 to increase the depth of field, so more of the subject stays sharp (see the quick sanity check after this list)
- Increase the exposure to bring out more detail in the images
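On the f/11 point: stopping a lens down (raising the f-number) deepens the depth of field, which matters a lot for a subject as small as a Bakugan. Here is a quick back-of-the-envelope check using the common thin-lens DOF approximation; the focal length, subject distance, and circle of confusion below are assumed values for illustration, not my actual setup:

```python
# Back-of-the-envelope depth-of-field check: stopping down from f/4
# to f/11 deepens the in-focus zone. Uses the common approximation
#     DOF ~= 2 * u^2 * N * c / f^2
# (valid when the subject distance u is well inside the hyperfocal
# distance). All numbers below are assumptions for illustration.

def approx_dof_mm(f_mm, f_number, subject_mm, coc_mm=0.02):
    """Approximate total depth of field in millimetres."""
    return 2 * subject_mm**2 * f_number * coc_mm / f_mm**2

focal = 50        # assumed 50 mm lens
distance = 300    # assumed 30 cm from the Bakugan

for n in (4, 8, 11):
    print(f"f/{n}: ~{approx_dof_mm(focal, n, distance):.1f} mm of DOF")

# f/4:  ~5.8 mm  -> only a thin slice of the toy is sharp
# f/8:  ~11.5 mm
# f/11: ~15.8 mm -> much more of it stays in focus
```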
With that, I tried again, with another 4 hours of processing time. This gave much better results!!
Overall, there were still problems with the mesh. I think that is because of the following:
- Lack of detail: The images were extracted from 4K video, which is only about 8 MP per frame (3840 × 2160), and since the Bakugan itself is so small, a lot of detail was lost
- Masking issues: For this run, I let the software perform the masking fully automatically, and around 30% of the images were discarded as a result. Had I done it manually, more detail might have been preserved.
For anyone interested in looking at the actual model, I’ve uploaded it to SketchFab:
I am satisfied with this round of testing. Eventually I will try again, hopefully with even better results.
Thanks for reading this post!
See you in the next one!