#i don't know how to motion blur things in paint 3d help
thebad-lydrawn-sanses · 4 months
Note
nnnnniightmarrwrrrr hes so WIFE!!!! i wanna give him a HUG!!!! (i will probably die mid-process)
[three comic panels]
Nightmare: QUESO Y ARROZ, YOU COULD'VE ASKED FIRST-
Nightmare: (LET GO OF ME)
CC: *motion blur*
CC: ~crunch~
Anon: worth…
QUESO Y ARROZ // CHEESE AND RICE
canmom · 3 months
Text
cameras & 3d
belatedly learning how to actually use a proper camera. it's funny: I know a reasonable amount about geometric optics and could explain mathematically how a camera works, but I hadn't managed to connect that to practical knowledge of 'how aperture priority mode works'.
so no wonder so many shots at the concert turned out blurry: I hadn't twigged that you're supposed to adjust the aperture and ISO until the auto shutter speed becomes reasonable (and the depth of field is appropriate for your intent). I also wasn't really using the different autofocus-area settings to full effect, so I'm sure it sometimes focused on the wrong thing.
I had it on auto ISO and f/2.8 the whole time. does explain why it was fairly easy to get the bokeh since that's the lowest f-number available on this lens (apparently very good for a zoom lens). but I should probably have manually cranked up the ISO to better handle the darkness of the room.
still, I have learned what the buttons and dials on the camera do now so hopefully the next lot of photos will be nicer. and hopefully learning a bit more about photography will also help me get better at 3D rendering lol.
in a real camera, the ISO, exposure time and aperture size all affect the brightness of the image, and each comes with drawbacks. increasing aperture size increases the amount of bokeh (equivalently, narrows the depth of field). increasing ISO (sensitivity) increases the amount of grain, especially in low light. increasing exposure time increases the amount of motion blur.
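to make the tradeoff concrete, here's a minimal Python sketch of the standard exposure equation (the shutter_time helper, the EV ~ 5 guess for a dim room and the K ~ 12.5 constant are illustrative assumptions, not measurements from my camera):

```python
# toy sketch of the photographic exposure relationship N^2 / t = L * S / K
# (N = f-number, t = shutter time in seconds, S = ISO, L = scene luminance, K ~ 12.5),
# rewritten in terms of EV at ISO 100, where N^2 / t = 2^EV100 * S / 100

def shutter_time(ev100: float, f_number: float, iso: float) -> float:
    # shutter time (seconds) a meter would pick for a scene of brightness ev100
    return 100.0 * f_number ** 2 / (iso * 2.0 ** ev100)

# a dim concert room is very roughly EV100 ~ 5 (an illustrative guess)
for iso in (800, 3200, 6400):
    t = shutter_time(5.0, 2.8, iso)
    print(f"f/2.8, ISO {iso}: about 1/{1 / t:.0f} s")

# each doubling of ISO, doubling of exposure time, or halving of N^2 buys one
# stop of brightness, which is why the three can be traded off against each other
```

at a fixed f/2.8 you can see how only raising the ISO pulls the auto shutter speed into hand-holdable territory.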
3D graphics is simulating a camera, but it comes with its own parameters and language. in 3D, by default you get pinhole-perfect sharpness. in rasterisation, depth of field is a postprocessing shader which blurs the image based on depth. in pathtracing, it's a setting you can turn on which (I believe) makes camera rays originate from points spread across a simulated lens aperture rather than a single pinhole. in Blender, you can input an f-number, but it's just another slider you can fiddle with, so I don't tend to pay much attention to the exact number when I adjust it.
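for instance, in recent versions of Blender that f-number lives on the camera data's depth-of-field settings; a minimal sketch via the Python API (assuming Blender 2.8+ and that the scene has an active camera):

```python
import bpy

# enable depth of field on the active scene camera (Blender 2.8+ API)
cam = bpy.context.scene.camera.data
cam.dof.use_dof = True             # otherwise the render stays pinhole-sharp
cam.dof.aperture_fstop = 2.8       # the f-number slider: lower = shallower focus, more bokeh
cam.dof.focus_distance = 3.0       # distance to the focus plane, in scene units
# cam.dof.focus_object = some_object  # or track an object's distance instead
```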
ISO and exposure time aren't really a thing in 3D. the brightness of your scene is something you choose when converting from the scene-referred floating point output of the renderer to the final display-referred integer colour (blender offers a handful of presets here). every real camera sensor has a limited dynamic range, but since the rendering is done in floating point, in 3D you have a near-infinite dynamic range at the 'sensor' stage.
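a toy version of that scene-referred to display-referred conversion (nothing like Blender's actual Filmic transform, just an exposure multiplier, a Reinhard curve and a gamma encode down to 8 bits):

```python
import numpy as np

def to_display(hdr: np.ndarray, exposure_stops: float = 0.0) -> np.ndarray:
    # scene-referred float values -> display-referred 8-bit values
    scaled = hdr * (2.0 ** exposure_stops)   # brightness is chosen here, not at a 'sensor'
    tonemapped = scaled / (1.0 + scaled)     # simple Reinhard curve to squash highlights
    encoded = np.clip(tonemapped, 0.0, 1.0) ** (1.0 / 2.2)  # rough gamma encoding
    return (encoded * 255.0 + 0.5).astype(np.uint8)

# values far above 1.0 survive in the float render and only get compressed at this stage
print(to_display(np.array([0.01, 0.5, 1.0, 50.0]), exposure_stops=1.0))
```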
motion blur in 3D is completely optional - again, it's something you'd get either from a post-processing shader which blurs the image along motion vectors, or by adjusting your renderer settings in a path tracer to trace rays at different simulated times within the shutter interval. the graininess of the image in a pathtracing renderer is really more a function of how many samples you trace, which has no connection to the duration of the simulated exposure in the scene.
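in Blender those settings look roughly like this through the Python API (a sketch, assuming Cycles in a recent version):

```python
import bpy

scene = bpy.context.scene
scene.render.use_motion_blur = True      # simulate an open shutter
scene.render.motion_blur_shutter = 0.5   # shutter time as a fraction of one frame
scene.cycles.samples = 256               # grain is governed by sample count,
                                         # not by the simulated shutter length
```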
so while 3D rendering prepares you for photography in some ways, it definitely leaves some gaps! but knowing a bit more about how real cameras work will be useful if I want to render something realistically or fake it in a painting.