3D Perspective Question
#1
This may be more of a question for a 3D CAD or some other sort of 3D forum, but I’m coming at it from the photo side of things. Likely someone has had a similar question.


I’d like to use two flat 2D images that share a common background in a composite image. Both pictures are taken pointing in the same direction; one is just taken a good distance behind the spot the other was taken from.


I can adjust the scale of the rear picture so the common distant objects come out the same height. However, objects in the foreground, close to the camera position, end up the wrong height compared to objects at the same distance in the picture taken from a distance behind the first.
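To put rough numbers on it, here is a back-of-the-envelope pinhole-camera sketch of why one scale factor can’t fix both near and far objects (the distances and sizes are just invented for illustration):

```python
# Minimal sketch, assuming a simple pinhole camera:
#   image_height = focal_length * real_height / distance
# All numbers below are made-up illustration values.

def image_height(focal_mm, real_height_m, distance_m):
    """Apparent height of an object on the sensor (arbitrary units)."""
    return focal_mm * real_height_m / distance_m

f = 35.0                      # same lens on both cameras (mm)
D = 20.0                      # second camera stands 20 m behind the first
d_far, d_near = 200.0, 5.0    # object distances measured from the FIRST camera
H = 2.0                       # both objects 2 m tall, for simplicity

# Scale the rear picture so the FAR object matches the front picture:
k = image_height(f, H, d_far) / image_height(f, H, d_far + D)   # = (d_far + D) / d_far

near_front = image_height(f, H, d_near)            # near object in the front picture
near_rear  = k * image_height(f, H, d_near + D)    # near object in the scaled rear picture

print(f"scale factor k = {k:.3f}")
print(f"near object, front photo : {near_front:.2f}")
print(f"near object, rear photo  : {near_rear:.2f}   <- much smaller")
```

With those made-up numbers the far objects match after scaling, but the near object in the rear photo comes out a fraction of its height in the front photo.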


I saw that GIMP has a 3D perspective transform tool, but it looks like it would not really be of any use for what I’m thinking.


I suppose maybe using a different lens for the two pictures is an option.


It also seems like this may be possible in software. I only use GIMP a little, and I’ve used 3D applications even less, though I have used SketchUp a bit most recently.


It lets you create a rectangle and then pull or push its face up or down to make a box. You can then resize one of the faces to get a sort of tapered box, add a beveled edge, or whatnot.


I was thinking that if you could put an image on the original rectangle and keep it attached as you pulled the face up and adjusted the size, maybe you could do this. The image is a raster image and the 3D app uses vector graphics, but in the initial box, right after the rectangle has been pulled up, each pixel in the top rectangle lines up with the same pixel in the bottom rectangle, so the two could be connected with a straight line. Those lines would still be there after any later adjustments. At that point the location of each pixel on the initial surface is known, and with the pixels connected by lines you have the makings of the 3D geometry and relationships needed to build a 3D vector image.
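As a rough sketch of what I mean (just my own illustration, not an existing tool), the straight-line idea works out to simple interpolation between the two faces:

```python
# Hypothetical sketch of the "pixels connected by straight lines" idea:
# the bottom rectangle holds the original pixels, the top rectangle is a
# scaled copy, and any slice in between is found by walking along the
# straight line joining each pixel's bottom and top positions.
# Names and numbers are my own illustration.

def slice_position(x, y, width, height, top_scale, t):
    """Where pixel (x, y) of the bottom rectangle sits on the slice at
    height fraction t (t = 0 is the bottom face, t = 1 is the top face).
    The top face is scaled by top_scale about the rectangle's centre."""
    cx, cy = width / 2.0, height / 2.0
    # Position of this pixel on the (scaled) top face.
    top_x = cx + (x - cx) * top_scale
    top_y = cy + (y - cy) * top_scale
    # Straight-line interpolation between the two faces.
    return ((1 - t) * x + t * top_x,
            (1 - t) * y + t * top_y)

# Example: a pixel near the edge of a 1000 x 800 image, top face shrunk to 60 %.
print(slice_position(950, 100, 1000, 800, 0.6, 0.5))   # halfway up the solid
```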


So the final perspective-corrected image would be composed of pixels taken from a series of slices, placed where they fall on the surface of each slice. If you adjusted the top face to be smaller, the middle of the result would be the top slice, and each outer ring would be made of the pixels whose lines fall outside the boundaries of the top slice.
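One possible reading of the “outer slice” part, continuing the sketch above (again, illustration only):

```python
# At slice height t, a pixel belongs to the visible outer ring if its
# position on that slice falls outside the footprint of the (smaller)
# top face. Purely illustrative; not an existing app's behaviour.

def on_outer_ring(x, y, width, height, top_scale, t):
    """True if bottom-face pixel (x, y), carried up to slice t along its
    straight line, lies outside the shrunken top face's footprint."""
    cx, cy = width / 2.0, height / 2.0
    # Position on the slice (same straight-line rule as the sketch above).
    sx = cx + (x - cx) * (1 - t + t * top_scale)
    sy = cy + (y - cy) * (1 - t + t * top_scale)
    # Footprint of the top face, centred on the same axis.
    half_w, half_h = (width * top_scale) / 2.0, (height * top_scale) / 2.0
    return abs(sx - cx) > half_w or abs(sy - cy) > half_h
```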


This is sort of how 3D printing is done, although with vector graphics all along. I also remember a project where donated human cadavers, possibly with dyes added to indicate blood vessels and other structures, were frozen, sliced into very thin slices, and photographed or digitally imaged one slice at a time. The digital slices were then reassembled to create a 3D image of the bodies.


I probably don’t have it exactly right how it all works, but it looks like something along these lines is already being done for other purposes. Does anyone know if there is an app or an easy way to do this?
#2
Some information I know as an amateur photographer. When you take a photo of an object with different lenses (different focal lengths: 24mm, 35mm, 100mm, etc.) from the same distance, the photo shows more or less of the scene around the object (much like digital zoom on a smartphone). When you do the same thing (different lenses) but from different distances, framing the object the same way, you get photos where the distance between objects looks shorter or longer, and objects behind the subject become larger or disappear behind it. Try it with a pencil in front of your eyes: hold the pencil at different distances and you see more or less of what is behind it.

What I am trying to say is that it is nearly impossible to make two photos taken from different viewpoints (distances) fit together. It is even difficult to blend several handheld photos (several slightly different viewpoints) into, say, an HDR. Programs that solve that problem blend the ghost information together so you think the photos fit, but it is smoothed over by some programmed magic. I've included a link that explains this stuff; look near the beginning at the photos with the little blue monsters.
https://www.studiobinder.com/blog/focal-...explained/
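If it helps, here is a tiny pinhole-camera sketch of the effect (my own made-up numbers): keep the subject the same size in the frame by changing the focal length along with the distance, and watch what happens to something standing behind the subject.

```python
# Illustration only: same subject size in frame, different distances/lenses.
#   image_size = focal_length * real_size / distance   (pinhole model)

def image_size(focal, size, distance):
    return focal * size / distance

subject_size = 1.8        # m, e.g. a person
background_size = 10.0    # m, e.g. a building
gap = 50.0                # m between subject and background

for subject_distance, focal in [(2.0, 24.0), (10.0, 120.0)]:
    # Both setups render the subject at the same size on the sensor.
    subj = image_size(focal, subject_size, subject_distance)
    bg = image_size(focal, background_size, subject_distance + gap)
    print(f"{focal:5.0f} mm at {subject_distance:4.1f} m: "
          f"subject {subj:.2f}, background {bg:.2f}")
```

The subject stays the same size in both shots, but the building behind it is several times larger in the long-lens shot taken from farther back.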
#3
Gimp is not the tool for this. It is a 2D bitmap editor; you cannot generate a mesh, deform it, and map an image onto it.
Any 'depth' in Gimp is created by applying light and shadow as in any 2D painting. One way is using a bump map: example https://i.imgur.com/6EiXWl3.jpg
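Roughly what a bump map does, as a small illustrative sketch (not Gimp's actual code): brightness is derived from the slope of a height map, so a flat 2D image ends up looking lit from one side.

```python
# Illustration only: fake "depth" by shading a height map.
import numpy as np

def bump_shade(height_map, light_dir=(1.0, 1.0)):
    """Shade a 2D height map as if lit from the direction light_dir:
    brightness follows the slope of the surface toward the light."""
    gy, gx = np.gradient(height_map.astype(float))
    lx, ly = light_dir
    norm = np.hypot(lx, ly)
    shading = (gx * lx + gy * ly) / norm
    # Normalise to 0..1 for display.
    shading -= shading.min()
    if shading.max() > 0:
        shading /= shading.max()
    return shading

# Example: a single round bump in the middle of a 200 x 200 image.
y, x = np.mgrid[-1:1:200j, -1:1:200j]
bump = np.clip(1 - (x**2 + y**2), 0, None)
shaded = bump_shade(bump)
```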

For freeware (just looked up 3DS Max - £234 a month - A*t*desk, what a bunch of cr**ks) you should investigate Blender: https://www.blender.org/
Since you are using Linux there is an AppImage here: https://www.blendernation.com/2020/08/19...-released/ - might be good for a try-out.
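If you do get into Blender, a rough, untested starting point in its scripting tab might look like this (Blender 2.8+ Python API; the path and names are placeholders, and projecting onto actual geometry comes after this):

```python
# Sketch only: load one of the photos onto a plane so it can be pushed
# around in 3D inside Blender. Path and names are placeholders.
import bpy

img = bpy.data.images.load("/path/to/front_photo.jpg")   # placeholder path

bpy.ops.mesh.primitive_plane_add(size=2.0, location=(0.0, 0.0, 0.0))
plane = bpy.context.active_object

mat = bpy.data.materials.new(name="PhotoMaterial")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
plane.data.materials.append(mat)
```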
#4
Thanks. I'm aware of Blender. I did open it once, maybe twice. Couldn't immediately figure out how to use it. I didn't have any need at the time either. I'll take another look.

I also agree with the characterization of the other software you reference.