Tips for easy and affordable 3D scanning

Article / 25 December 2021

Goals of this article: getting you up to speed on the latest developments in 3D scanning so that you can reliably output the meshes you need, for free or at low cost, no matter your hardware.

Level: Intermediate (I will assume some familiarity with Android/iOS and Windows/Linux/macOS computers, and that you've heard of 3D scanning before).

Whether you process the data in the cloud or locally, and whether you capture it with photos or with structured light (LIDAR, infrared FaceID), what you want to acquire is always points, LOTS AND LOTS OF POINTS. Let's help you capture the ideal dataset, whatever hardware you have.


Part 1: Android and iOS mobile photogrammetry

As we approach the end of 2021, Android phones and their main OS developer, Google, have not provided the same comprehensive Object Capture/ARKit stack as iOS and Apple. However, Apple likes everything to be "walled in", so you've also got to process iOS captures... on macOS. Meh.

Cloud 3D reconstruction changed the game: you can let someone else run energy-efficient M1 Mac minis or one large Mac Pro, just send them the data, and rent the compute as a service. The introduction of Polycam Web allows Android users and even quadcopter drone pilots to take advantage of this framework by using "photo mode" and uploading photos from outside the iOS ecosystem.

Best apps for iOS: Polycam, Trnio.

Best apps for Android: OpenCamera (1-2 second interval HQ JPEG capture with all filtering disabled)

How to acquire the ideal dataset: since photogrammetry works by tracking 2D features to create depth maps, you need some level of overlap between photos. This is exactly the same thing as "panorama mode" or panorama stitching, but in 3D this time. The other thing you want is sharpness, since 2D features cannot be tracked in the presence of motion blur or out-of-focus areas (which makes very small objects difficult to scan). Last but not least, you want coverage (seeing every side).

To achieve overlap, you want most of the frame to remain similar between shots (shift or rotate by only about 20% at a time), and make sure your object has plenty of grain or texture to capture. You'll never be able to capture a chrome ball with today's algorithms, since reflective surfaces are view-dependent: they change optically depending on your own position in each photo. If overlap is not achieved, most 3D reconstruction software will fail to produce a continuous result and start making wild guesses about how far you physically traveled between pictures. The easiest method to maximize both coverage and overlap is a low+high two-pass orbit: you first rotate around the object from your full height, holding the phone up and centering the desired object in the frame while circling it, then you do a second pass from a low angle to capture the undersides. Optionally, you can then break from the orbit and gradually get closer to a detail of the object you want to make sure to capture correctly. Remember: if you don't have coverage, for the computer it's an unknown blob. Even with the latest machine learning, reality is so rich and surprisingly complex that you'll find computers don't make great guesses yet, or rather not in 3D.
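If you like numbers, here's a back-of-the-envelope way to estimate how many shots a two-pass orbit needs. This is just a sketch in Python; the 65-degree field of view and 80% overlap are assumptions you should tune to your phone and subject.

```python
import math

def photos_per_orbit(h_fov_deg: float = 65.0, overlap: float = 0.8) -> int:
    """Rough shot count for one 360-degree orbit.

    h_fov_deg: horizontal field of view (a typical smartphone main
               camera is around 65 degrees -- an assumption).
    overlap:   fraction of the frame kept in common between
               consecutive shots (0.8 = 80%, i.e. shift ~20% per shot).
    """
    # Each step can rotate the view by at most the non-overlapping
    # fraction of the field of view.
    max_step_deg = h_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / max_step_deg)

# A two-pass (high + low) orbit:
print(2 * photos_per_orbit())  # -> 56 photos
```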

Side note: It's as much a philosophical debate as it is about algorithm design; filling the void reveals biases in the training data.

Finally, let's expand a bit on why sharpness matters, and how that intersects with lighting, sensor size, and ISO noise (the sensor sensitivity). A smartphone achieves photographic quality with a small sensor by doing all sorts of fancy computational tricks to make the picture look good (this is why cameras that make a physical shutter sound are bigger and heavier and capture raw photos instead). Unfortunately, the tricks that make pics look good for social media also make them less-than-ideal candidates for photogrammetry. Your aim is to get as close as possible to a "straight out of the sensor" JPEG from your smartphone, because when a smartphone removes ISO noise using an algorithm, it introduces a 2D error, which will propagate into a 3D error. You can imagine that any other computational photography trick will propagate errors too.

Example: many smartphones introduced a "night mode", which shoots a burst of high-ISO photos and recombines around 10 of them to eliminate the noise. Any imperceptible alignment errors between those 10 photos introduce errors too.

So, to capture the ideal dataset, we want to avoid any 2D filtering in the smartphone and find a balance between shutter speed and ISO noise. Shutter speed is the measure of how long you let light in, and therefore of motion blur. Since you're rotating around your object, if you don't pause and click for each photo (tedious), you will see blur appear in any low-light situation. The thing is, you're also dealing with a small sensor (which doesn't capture a lot of light), AND you usually want to scan on overcast days to achieve an "unlit" look that can make your scan look good in any new ray-traced/game lighting conditions.

Shutter speed is not something smartphone users think about much, but it's an obsession for cinematographers and photographers alike; people move, you move, and 1/100 is usually the minimum to achieve sharp results. Due to the small sensor size, your smartphone will often drop as low as 1/30th of a second, so if you're doing the two-pass orbit I recommended above, you might see that the center object is sharp but the background has rotation blur. That's bad. 2D feature tracking for 3D reconstruction only works when it can recognize features, and the parallax between the object you're scanning and the background is essential (parallax: how fast things appear to move relative to each other, like when you're in a car or train and lock your eyes on an object, and the background suddenly seems to move counter to your vehicle's direction). If the background is blurred, the algorithms will lack a ground plane/reference frame to guess your camera position in 3D space. So you have to boost ISO to achieve a 1/60 or faster shutter speed if you're scanning fast. Thankfully, on iOS, Polycam and Trnio do this automatically, taking pictures at the ideal moment when blur is lowest; even at slow shutter speeds they will only take a picture if it's sharp.
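To make the ISO/shutter trade-off concrete, here's the reciprocity arithmetic in a tiny Python sketch (the metered values are made up for illustration):

```python
def iso_for_shutter(base_iso: float, base_shutter_s: float,
                    target_shutter_s: float) -> float:
    """Exposure reciprocity at a fixed aperture: halving the exposure
    time must be paid for by doubling the sensor gain (ISO)."""
    return base_iso * (base_shutter_s / target_shutter_s)

# The phone meters 1/30 s at ISO 200; we want 1/60 s to kill motion blur:
print(iso_for_shutter(200, 1 / 30, 1 / 60))  # -> 400.0
```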

Ideally, someone could make an Android app that does this for dataset capture too! You don't want to waste time deleting blurry pics; let the smartphone's CPU decide when to take the picture by estimating your motion from the gyroscope's angular velocity data (lowest = better).
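A minimal sketch of what that trigger logic could look like, in Python. `gyro` and `camera` here are hypothetical placeholder objects, not a real Android API:

```python
import time

STILLNESS_THRESHOLD = 0.05  # rad/s; tune per device (an assumption)

def capture_when_still(gyro, camera, n_photos: int = 40) -> None:
    """Fire the shutter only when the gyroscope reports that the phone
    is (almost) still, so every frame in the dataset is sharp."""
    taken = 0
    while taken < n_photos:
        if gyro.angular_speed() < STILLNESS_THRESHOLD:
            camera.take_photo()
            taken += 1
            time.sleep(1.0)   # give the user time to move to the next spot
        else:
            time.sleep(0.02)  # keep polling ~50x per second
```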

In the meantime, Android users can decide whether they prefer manually taking pictures or using the 1-2 second interval capture mode of the OpenCamera app.

Get a feel for it! There's a sort of rhythm to starting and stopping to capture sharp pictures while scanning fast on this kind of interval. Using this method and a two-pass high/low orbit, you can scan a medium-sized rock or a tree trunk in less than 3 minutes.

Smartphones with ARM CPUs are amazing for acquiring datasets because they're lightweight and always in your pocket, but the small sensor can get in the way, so let's explore other options for taking 100's of photos with ease before we talk about how to do cloud and local 3D reconstruction.

My pick: a used iPhone SE II with a cracked screen (you're not paying for anything extra; this is a work device for me, not a toy). Add tempered glass protection (like the previous owner should have done) and a silicone case, and you're good to go!


Part 2: DSLRs, micro 4/3, or the cheapest sharpest cameras

The larger the sensor, the more light goes in, and the higher the shutter speed you can achieve at lower noise. Noise prevents good 2D feature tracking and leads to 3D errors. So a bigger sensor and a wider aperture (a small f-number, like f/1.7) will lead to fewer errors and faster, more confident scanning.

Problem: DSLRs are super expensive. Solution: smartphones became good enough for social media, so plenty of people are selling used micro 4/3 cameras, which occupy a niche in sensor size between the heavyweight full-frame sensors (like a Canon 5D MKII), APS-C, and the very small sensors of smartphones and point-and-shoot cameras (which are pretty useless now lol).

So that means you can get good used cameras with micro 4/3 sensors for decent prices. These cameras are extremely solid, and micro 4/3 bodies often have a "silent shutter" mode, aka electronic shutter, where the mechanical curtain does not physically move, extending the operational lifetime of the camera by many years (moving pieces = failure risk). Look for a camera with 10 Mpix or more of resolution and good ISO performance (measured in IQ; here's a good website to compare models). To achieve sharpness and avoid motion blur, a camera with in-body or lens stabilisation is ideal.

My pick: the GX85 with the 12-35 kit lens (dual stabilisation and no AA filter).

JPEG is usually enough; don't bother with RAW, because it will just clog up your RAM if processing locally, or your Wi-Fi if going cloud. The exception is using RAW to create an "unlit" texture look by lifting the shadows in something like Lightroom or Affinity Photo's batch mode, producing an "ideal HQ JPEG" from the RAW data.

Peak sharpness for most lenses sits about halfway between the highest and lowest f/aperture, usually f/5.6 for small sensors because of optical diffraction effects. You want the highest f-number below f/8 that still gives you your desired shutter speed (above 1/60th of a second) within your ISO noise tolerance (ISO 100 to 1600). Experiment with ISO and f-number; you'll find, for example, that for small objects it's worth boosting the ISO to get a higher f-number (smaller aperture) and ensure there is no depth-of-field blur (blurry background). This will ensure continuity. In other cases the complete opposite is true, such as capturing large objects in low light, where a wide aperture (small f-number) is ideal.
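If you'd rather compute than experiment, the standard thin-lens depth-of-field formulas make the trade-off visible. A minimal Python sketch; the 0.015 mm circle of confusion is a value commonly quoted for micro 4/3, and the 25 mm lens / 0.5 m subject distance are just example numbers:

```python
def depth_of_field(focal_mm: float, f_number: float,
                   subject_m: float, coc_mm: float = 0.015):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    f = focal_mm
    s = subject_m * 1000.0  # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    if s >= hyperfocal:
        return near / 1000.0, float("inf")
    far = s * (hyperfocal - f) / (hyperfocal - s)
    return near / 1000.0, far / 1000.0  # back to metres

# 25 mm lens on micro 4/3, subject at 0.5 m: stopping down widens the
# sharp zone from ~2 cm at f/1.7 to ~6 cm at f/5.6.
for n in (1.7, 5.6, 8.0):
    print(f"f/{n}: near/far = {depth_of_field(25, n, 0.5)}")
```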

Generally, the wider the lens the better, but any barrel distortion ("GoPro effect") will introduce optical errors in reconstruction. I bought a 7.5mm fisheye lens and it proved to be a waste of money, since I don't have the patience for 2D undistortion, or for shooting in RAW and pre-processing. Wide-angle, wide-aperture lenses with no barrel distortion are very expensive because they are optically complex objects with precise construction needs (usually German or Japanese lenses). Investing in one of them could prove useful for scanning faster if you're scanning for a business. Lenses hold their value, while camera bodies drop in value over time. The micro 4/3 mount used by Panasonic is extremely popular, which means a good lens selection. Lenses for full-frame sensors quickly get astronomically expensive and really heavy (it's a matter of geometry: you've got to have lots and lots of glass if you want a wide aperture ratio).
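For the record, if you do want to salvage a distorted wide-angle dataset, undistorting in 2D first is possible. Here's a rough sketch using OpenCV's standard distortion model; the intrinsics below are placeholders you'd obtain from a prior chessboard calibration (and true fisheyes need OpenCV's separate fisheye module):

```python
import cv2
import numpy as np

# Placeholder intrinsics from a hypothetical cv2.calibrateCamera run;
# these numbers are NOT valid for any real lens.
K = np.array([[1500.0,    0.0, 960.0],
              [   0.0, 1500.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("IMG_0001.jpg")
cv2.imwrite("IMG_0001_undistorted.jpg", cv2.undistort(img, K, dist))
```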


Part 3: LIDAR, FaceID, and other structured light hardware.

What if you have access to something other than just capturing incoming photons? What if consumer hardware built to improve AR happened to carry structured-light equipment similar to what used to cost 30K USD a few decades ago?

The basic principle: a portable device shoots structured light at the scene and analyzes the way it bounces off the surface to generate the points, instead of relying on 2D tracking. This has a few pros and cons compared to pure photogrammetry:

Pros: 

- Works at night (no texture though) since your capture device becomes a light!

- Reconstruction can happen locally and at extremely low CPU cost. All it has to do is merge the various 2.5D slices you're sending it. Most devices now do realtime reconstruction. This means if you have a successful capture method, you could capture 100's of objects a day. Industrial-grade tech!

- Stability: it can capture people's faces even if they are moving slightly; since each 2.5D slice has lots of data, alignment noise is lower.

- Instant scanning of indoor walls! This is what it's designed for. Measure anything, AR-enhance your living room, etc.

Cons:

- Range: a major limitation is that structured light follows the inverse square law, where intensity, and therefore precision, drops off quadratically as you get farther away (see the quick sketch after this list). This means scanning large/gigantic objects is out of the question, as the max range is about 3m for IR and 10-20m for LIDAR.

- Alignment errors: due to the range and precision limitations, you will often find you don't have enough background information to track your movement correctly, leading to wild reconstruction errors where the model duplicates itself instead of recognizing that it's you, the observer/camera, that moved.

- LIDAR isn't super high-res, especially on the geometry side; it's kind of blocky and disappointing. Most folks I talked to were disappointed with the promises of iPad LIDAR, for example.
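The quick sketch promised above: assuming an idealized inverse-square falloff, the usable signal collapses fast with distance (Python, illustration only):

```python
# Signal relative to a 1 m baseline, assuming ideal 1/d^2 falloff:
for d in (0.5, 1.0, 2.0, 3.0, 5.0):
    print(f"{d:>4} m -> {1.0 / d**2:7.1%} of the 1 m signal")
```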

My pick: any iPhone with FaceID, the Scandy Pro app, and the Lookout accessory.

The Lookout accessory lets you mirror/flip the direction of the IR scanning: not to recognize your face, but to do full-body scanning of friends who are standing still, and of small objects. Pretty cool!

Note: scanning to vertex color instead of texture gives you great flexibility to import the result into Nomad Sculpt, giving you an offline, on-device pipeline for scanning and creative recombination.

Part 4: Reconstruction energy/$cost considerations and trends

The Degrowth perspective is that we should all attempt to produce fewer devices and more energy-efficient ones, use less cloud, and spend less time on energy-hungry desktops. How does this relate to everything I mentioned above? Let's imagine various scanning scenarios and make some assumptions:

Scenario 1: Phone scanning, cloud reconstruction (efficient cloud machines such as ARM CPUs consume around 30 watts per reconstruction node, but the network transfer is inefficient and contributes to ICT energy demand growth, which is all too often met with fossil fuels that should rather stay in the ground. Avoid 4G and 5G; prefer Wi-Fi to reduce your carbon footprint and phone bill).

Scenario 2: IR local scanning (the most efficient, with ~5 watt realtime reconstruction on an iPad/Surface Pro ARM CPU).

Scenario 3: DSLR and local reconstruction (cameras consume next to no energy, but you are responsible for the energy efficiency of the reconstruction. If you're not careful, you could end up burning 250 watts for no good reason, reconstructing for hours and hours, unable to use your PC for anything else; see the rough arithmetic below).
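The arithmetic is just power x time, but the spread between scenarios is striking. Illustrative numbers only (the durations are assumptions, not measurements):

```python
def energy_wh(power_w: float, hours: float) -> float:
    """Energy used = average power x time."""
    return power_w * hours

print(energy_wh(5, 0.1))   # iPad IR scan, ~6 min realtime merge -> 0.5 Wh
print(energy_wh(30, 0.5))  # ARM cloud node, 30 min job          -> 15.0 Wh
print(energy_wh(250, 3))   # x86 desktop, 3 h local job          -> 750.0 Wh
```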

Recommendation: use as few photos as possible for cloud processing, use the most efficient ARM CPUs, or process locally during the noon peak, when solar and other renewable generation is highest. Processing the scans at night means some nuclear power plant, battery bank, or gas-fired thermal power plant has to run and use resources to cover it. The Degrowth perspective is that the excess generation of renewables is still very generous, and it's kind of SolarPunk to time your computing needs around the Sun.

For large scans (over 100 photos), there are diminishing returns to resolution, and batch-resizing JPEGs from 4K and above down to 2048px horizontal (using Affinity Photo for iPad) ensures the fastest upload/reconstruction time while minimizing ICT footprint.
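If you're on a desktop instead of an iPad, a few lines of Python with the Pillow library do the same job (the folder names here are placeholders):

```python
from pathlib import Path
from PIL import Image  # the Pillow library

src, dst = Path("scan_photos"), Path("scan_photos_2048")
dst.mkdir(exist_ok=True)

for jpg in src.glob("*.jpg"):
    img = Image.open(jpg)
    img.thumbnail((2048, 2048))  # fit within 2048 px, keep aspect ratio
    img.save(dst / jpg.name, quality=90)
```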

Best Cloud option: Polycam (subscription, but you get 2 free scans per month as a trial).

Local option: RealityCapture PPI (pay-per-input: you license per megapixel of dataset, as low as $0.30 per scan).

Open source alternative: Meshroom is free 3D reconstruction software built on the AliceVision nodal framework (depth map generation is super slow as of 2021, though, and unless you're running it on ARM it's a big waste of energy). The Degrowth point of view here is that we need to swap the depth map generation for something much faster that still runs on ARM CPUs and isn't RAM-hungry. If the issue of temporal coherence can be resolved, something like the BoostingMonocularDepth algorithm could be ideal.

Parting thoughts: hope this article helped. Some of you might be wondering why I think blogs are still worth investing in instead of making videos on YouTube; it's about accessibility, portability, and ownership. I have this article saved offline and can publish it on a self-hosted device like my Raspberry Pi. It's screen-reader friendly, so it's more inclusive than most YouTube videos, and it uses less data, which is great for inclusion and degrowth. I prefer seeking the maximum amount of information in the least amount of data. Plus, it's easy to make a PDF guide out of it and send it to friends.

Happy scanning in 2022! Plan your path ahead, and watch out for slippery rocks.

Post-collapse futurism: phase I

Article / 03 June 2019

Before I get started, I would like to thank you all for following me along on this journey. I know that what I write is not necessarily comfortable or nice to hear, so I really appreciate the time you take to read this. 

At the time of writing this post, there are now 20,000 of you, thanks for the support!

Post-collapse Futurism : phase I

Around Christmas time last year, I tried formulating the problem by writing Your 2050 dystopia is weirdly optimistic, and invited artists to create art that better reflects the challenges of our time.

But before urging everyone to join this art movement, I had to try to define it better. It took 4 months of reading very depressing scientific papers about energy supply, solar radiation management side effects, and climate feedback loops before I could come up with any ideas worth painting.


One of the important steps in defining this project was to articulate what makes post-collapse art different from post-apocalypse art. While the difference between the future I imagine and the one sold to us by Silicon Valley billionaires is obvious, the difference between the collapse of civilization as we know it and a nuclear wasteland is harder to define.

I'm not going to attempt to create sharp razors for now; I think it's a fuzzy border (Waterworld could fit in, for example). What I can offer, however, is a list of checks:
Does your collapse cause more than 90% of the world's population to die? If yes, it's probably post-apocalypse, not post-collapse. Is the problem the same everywhere, or is each region affected by different problems and reacting differently? Etc.

This chart helped me a lot in defining what types of images I would or would not paint. I will not, for example, paint extreme sea level rise where the tops of buildings are underwater. It's been done really well before, and it's the middle-left of the futurism spectrum that interests me more: future pathways where it's not the apocalypse, but humanity is definitely not handling the involuntary de-industrialization well.

In a nutshell, my working definition was:

Post-collapse futurism is an art movement focused on showing the policy and cultural failures of today manifested in the hardships of tomorrow.


It's upon the completion of the third painting that the path forward cleared up a bit; with this type of artwork, I had the potential to start conversations that are usually uncomfortable, or framed as a partisan issue when explored by media outlets.
I think Cli-fi, as a definition of sci-fi focused on climate change, is a great start, but it looks like it's mostly confined to writer circles at the moment. I also think it's a bit limited as a brand, since the climate crisis is only one of 10 major challenges facing humanity this century. If the climate crisis barely gets any coverage, it's even worse for the others. For example, I could barely find one good article about topsoil depletion outside of academia.

I realized that I also had the opportunity to plant the flag: to engage other artists in creating this kind of art, and to lower the barrier of entry between caring about climate change and having a finished piece.

Phase I is about planting that flag. I would say I'm just at the beginning of this phase, but it's already pretty promising. I'm lucky to have incredibly supportive friends and family who helped me through depression and the earliest stages of this project.

Here’s some of the things in the works for phase I:

  • Creating 10-20 post-collapse pieces (in progress).
  • Making a website to feature post-collapse artists and their art (I was trying to learn HTML and do it myself, but an artist reached out to help. Thanks a lot, Maxi! The website could be online in 3-4 months).
  • Once the website is running, inviting other artists to send artworks that fit this theme of post-collapse, to build an art movement.
  • Writing a statement to go alongside the art, as well as a biography tailored for people not familiar with my work, or even concept art in general (a very supportive friend who is also an Editor helped me write it!).
  • Defining post-collapse futurism in a press release (I can start, but honestly it will be people, from artists to viewers to journalists, who define what it is).


Artist statement

We often think of disasters as isolated, one-time events: brief trials to be overcome, that humanity will bounce back from, stronger than before. Rarely do we imagine disaster, or a state of crisis, becoming our new norm. In my work, I explore what that existence looks like: one where there's no disaster relief team coming to the rescue, because the slow global collapse of civilization is the disaster itself. Like time travelers, this series lets us take a peek into a very likely version of our future. If we don't like what we see, we need to change our present, now.


Due to the inertia of the fossil fuel infrastructure the world relies on, our planet is guaranteed to warm by another 3°C. Unless governments change their priorities or economic growth grinds to a halt, this future is already history.


If humanity’s “Plan A” is to solve our current challenges of global structural failures and systemic problems through infinite growth of technology and energy with no compromises, the intent of my work is to highlight our total lack of a “Plan B.”


My hope is that exposing the fragility of industrialized countries, and painting their future in a state of post-collapse, will increase our empathy for those suffering from disasters today, and help people visualize what life will look like, not so very far from now, if we don't strive for change together.

Time travel with me now, to the not-so-distant future, where we will visit our world as it's currently on track to become.


Welcome to this historical retrospective of the late 21st century.


Short Biography

Efflam Mercier was born and raised along the coast of Brittany, France. After working on 3D animated films in Paris, Efflam moved to Los Angeles to design the fantastical, imaginary worlds of video games -- from dragons to shipwrecks, scavenger cultures to risen dynasties. Yet, however fun dreaming up the wild and thrilling landscapes of fictional, escapist worlds might be, Efflam's heart has always been deeply concerned for this world. Our world. The world you can't change with a brush or Photoshop. The world that can't be reimagined or rebooted if we don't like what we see. The world we can only change if we all pull together.


So he decided to paint a different kind of future -- not scifi, not fantasy -- but the very real future we all might face if things don’t change. This series is Efflam’s way of starting a conversation we all need to have now. There is much to fear about our future’s prospects. But there is much to take hope in, as well. That hope starts with each other. So let’s step into this version of our future together, take a look around and discuss if it’s the history we want to paint ourselves into.

Efflam usually paints digitally using open-source software; however, he recently switched to traditional mediums like oil and acrylic, fearing that digital art will not be archived if industrial civilization collapses.



Thank you for reading this post. I hope the statement and biography help explain why I'm doing this painting series.
If you started following my art for the dragons and the knights doing knight stuff, I'm sorry to disappoint, but I will probably not paint any more of them in my free time for as long as the urgency of the climate crisis supersedes my desire to draw fun escapist things.
I found my calling with this project, and I'll pursue it no matter the personal cost.

The way I rationalized staying in the entertainment industry is that it allows me to make no compromises and pull no punches in my personal paintings. If I went and did those paintings full-time instead, I would probably drift over time toward art that is safer and more decorative in order to sell, and I would end up hating it. This is why you can still expect me to post fantasy and sci-fi art from the games and movies I work on, so stick around!

PS: Did you know the royal baby Archie got more press coverage in a week than the climate crisis did in the entire year of 2018? Let me know your thoughts below!

Your 2050 dystopia is weirdly optimistic

Article / 10 December 2018

TLDR: I urge everyone to join the conversation on how we can create art that more accurately reflects the challenges of our time.

Picture this: a sprawling megalopolis covered in smog. The faint glow of neon signs and giant LED screens displaying the latest advert for the latest high-tech drug. Constant aerial traffic of flying cars, and a spaceship now boarding for the moons of Jupiter. Police wearing full-body exoskeleton armor patrol the crowded, lively streets, while underground networks of ultra-libertarian hackers fight for the rights of the digital commons and religious sects try to outlaw consciousness transfers.

Sound familiar? It's basically every other sci-fi world imagined after Blade Runner.

I'm allowed to talk crap about this image because I painted it.

Here's the thing: artists have a huge role in influencing the subconscious narrative of humanity.
We got into art because it was fun; for some of us, it's also how we make a living now.

Problem 1: It's rooted in the (mostly) outdated challenges and imaginary of the 80's

Good sci-fi usually projects human challenges and moral/philosophical dilemmas into the future. But did you ever notice how most old sci-fi looks very dated?

This comes from the fact that human imagination is usually bounded by our surroundings. Take this electric scrubber, for example: mass-produced mechanical parts were the hot new thing in 1900, so it makes sense that a "cleaning machine" would extrapolate from them. What most people in 1900 couldn't imagine is that sucking air is way more efficient, but they couldn't think of it, since there wasn't much pneumatic technology in the life of the average citizen.

So here's my problem with almost every concept artist (including me!) loving Blade Runner so much: it reinforces an 80's imaginary of the future. Meaning it's the future, but viewed from 1968-1980.

I will separate the imaginary from the challenges.
The imaginary is my personal analysis of science fiction from the 70's and 80's, while the challenges are historical accounts of what the authors of 70's and 80's science fiction were grappling with at the time. Note: all challenges are sourced at the end of the article using the [source number] tag.

Imaginary: 

  • "We went to the moon, now it's time to colonize the solar system"
  • "We're going to colonise other planets once earth is overpopulated"
  • "Flying cars are just around the corner"
  • "The use of robots are going to raise ethics questions very soon"
  • "The USSR will live on forever. The cold war is here to stay."
  • "Japan's economy is going to surpass the USA"
  • "science and industry will keep making more and more powerful machines"
  • "Humanity is the center of the economic universe" 

Challenges: 

  • Pollution at the city level was a major concern in the 1980's, as car traffic increased and photochemical air pollution was getting worse.
    At the time of the making of cyberpunk dystopias like Blade Runner, air pollution was actually at its peak in many cities. For example, air quality in Los Angeles slowly got better after the Clean Air Act of 1970 and its 1990 amendments [1]
  • In the 60's, the population growth rates of India and many other countries were absolutely out of control [2], leading to widespread fears of overpopulation in the scientific community. In 1968, Paul R. Ehrlich (a Stanford University biologist) published "The Population Bomb," an apocalyptic vision of an overpopulated earth and mass starvation.
    You can see that the peak of the growth rate matches the birth of the fear of overpopulation. I highly recommend reading the NYT article titled "The Unrealized Horrors of Population Explosion". We can safely assume that most fiction written around that time was influenced by this challenge.

    While air pollution is still a concern in large cities, it is a mostly understood and reversible phenomenon, and the ethics of robotics is still very much a philosophical debate rather than a software engineering one.

    Now let me attempt to define the imaginary and challenges of 2019 onwards. This is no small task, and of course my list is going to be incomplete, inaccurate, etc. It is meant more as a conversation starter, to move towards a more up-to-date vision of the future.

  • Imaginary: 
    • "Humanity is fucked"
    • "We'll be fine, we'll go to Mars haha"
    • "Technological singularity is coming soon"
    • "Nuclear Fusion is coming soon"
    • "The economy can keep growing forever"
    • "This is all going to crash soon"
    • "Developing countries are going to provide 2/3 of the GDP growth by 2040"
      (Note : taken from an actual sustainable development investment journal)
    • "Renewable energy, yay!"
    • "Renewable energy is a leftist conspiracy"
    • "The scientists are going to save us all with some breakthrough technology"
    • "Dude, where's my flying car?"

  • Challenges: 

I'm going to focus on energy, because most other problems are downstream of it.




Remember those memes about graphic design/art? Where you can't get it all at the same time?



We basically have the same challenge: most of the world's population thinks we can still have it all.

Look at the correlation between energy consumption and CO2 emissions:

According to a 2015 paper titled "Causality among Energy Consumption, CO2 Emission, Economic Growth and Trade" by P. Srinivasan et al., "the study detects one-way causation that exists from energy use to CO2 emission and trade" [3]

A one-way causation is a pretty big deal in science; it means one thing directly causes another.

To simplify: energy used = CO2 emitted.


Wait, what about renewables?


Well, first it's important to understand the difference between electricity and energy. Electricity is energy in the form of a flow of charged electrons. Oil is energy in the form of a high-density liquid fuel that can be ignited, releasing heat and pressure.

Usually when we talk about renewables, we are actually talking about biofuels and biogas (organic matter turned into fuel), or energy-capture devices that transform other forms of energy into electricity: photons to electrons (solar PV), airflow to mechanical motion to electrons (wind).


So if you take a pie chart of electricity generation, it looks promising! Hydroelectric is at 17% for example. [4]

But as you take a step back and include all the types of energy that are not in the form of electrons, you end up with this much more depressing chart:

And even worse, look at how renewable energy barely keeps up with the growth rate of fossil fuels:

To summarize: if we care about the planet, we are running out of energy; if we care about energy, we are running out of planet; and if we want carbon-free energy, we should have started 100 years ago.



Okay, but what does all of this have to do with our cool dystopian sci-fi stuff?

If Sci-fi is a way to explore pressing issues by projecting them into the future, I believe we are collectively under-utilizing the medium.

Based on what we know today, the dystopias and utopias that we draw should look much different.

Don't get me wrong, spaceships and sprawling polluted megacities are very cool to paint. But I think if we care about science fiction as an art form, we should try to understand the world a little bit better.


To me, science fiction is a "what if?" engine, and what makes it so great is that you try to portray everything normally past that first "what if?"

Then you develop your story around it and make it look cool. Congratulations, you've made a sci-fi film.


So here's why your 2050 dystopia is weirdly optimistic:
If your dystopia focuses on a repressive totalitarian government in a super-technological megacity, you are assuming that we are going to solve the dual energy/climate problem. That in itself is already science fiction! So now it's not "what if X", it's "what if X AND we solved the climate/energy problem". Same thing with interplanetary travel: you are assuming there will be a sufficiently intact civilization and enough investment to support such an industry.

Physics and engineering today tell us that your megacity in 2050 is either:

• Powered by coal and divided over the issue of what to do with the climate refugees who camp outside the city's makeshift walls, constantly under the watch of the super-armed state police.
• Powered by coal and slowly sinking into the sea. Most of the poor live on floating platforms or use bridges to cross between crumbling high-rises, while the rich live on the hills.
• Powered by renewables, but only the main systems of the city are operational. Hunger drove most of the population out of the city and back to the farmlands. Buildings are abandoned; cars are stripped of their engines to power agricultural machines on rudimentary biofuels; buildings are stripped of their copper to make motors for home-made wind generators. The few who stay in the city are organized in Organopónicos, a system of urban agriculture developed in Cuba during the fuel shortage that followed the fall of the Soviet Union.
• Powered by nuclear, but it's actually one of the last operational cities on earth. The population is growing concerned about the supply of uranium and thorium; some even say the government is hiding the fact that there are only 10 years of fuel left, as the rest of the world placed an embargo on uranium.

All of these examples are world-building based on just ONE of the challenges of our time; there are many more ethical, social, and technological challenges to explore. That said, it's clear that energy is the cornerstone of civilization, so maybe we should link everything back to it.
• What are the ethics of climate accountability? Who is going to pay for the damages? Are hordes of hungry displaced civilians going to besiege the fossil fuel billionaires' doomsday retreats and hunt down their yachts across the world's acidified oceans?
• What are the social implications of an energy descent? How would cooperation triumph over egoism? Would a low-energy world be more or less democratic? How would a woman living in France see her sister across the Atlantic Ocean once all fossil fuels are banned for civilian use? Would sailing make a comeback? In that case, wouldn't piracy also make a comeback?
• How would small communities share and access knowledge through technology? Will they repair phones and turn them into simplistic low-energy web servers? Will human-powered velomobiles deliver news from town to town on broken roads?

I've been researching the energy/climate problem extensively for the past few months, and let me tell you: there is no miracle solution.

I'm very concerned that our imaginative output as artists almost never reflects this impending energy descent.
I wonder if it's because few people are aware of the problem, or rather that we don't know how to portray it.
I think I'm in the latter category: I want to make art that reflects this post-carbon vision of the future, but I wanted to make sure to do my research first.

How do YOU imagine 2050 will look?
Permaculture utopia? Thermonuclear weapons aimed at the biggest CO2-emitting countries? Desperate measures like dropping sulfuric acid into the atmosphere [5] gone wrong?

Let me know in the comments below!



Sources:

[1] Arthur Davidson, "Photochemical oxidant air pollution: A historical perspective," Studies in Environmental Science, Vol. 72, 1998, pp. 393-405.

[2] Up to 2015: Our World in Data series based on UN and HYDE; after: UN Population Division (2015), Medium Variant projections 2015 to 2100.

[3] P. Srinivasan et al., "Causality among Energy Consumption, CO2 Emission, Economic Growth and Trade," 2015.

[4] The Shift Project Data Portal

[5] A Cheap and Easy Plan to Stop Global Warming

How I discovered my love for cinematography

Article / 16 July 2018

In 2013, my last year of high school, I was on a learning spree.

At the time, I wanted to become a 3D lighting artist. I was emailing industry professionals all the time, asking for advice.
I didn't always get replies, but some artists really helped.

Benjamin Venancie is one of them. He is a lead lighting artist at DreamWorks, and I had just asked him something along the lines of:
"On an artistic level, how does one learn about lighting? Any books or methods to recommend?"

I'm going to try my best to paraphrase and translate the answer, in a way that can help others develop their taste for cinematography.


  1. Photography: 
    Being a good lighter is first of all about a global understanding of images: not just the light, but also the composition and everything that has to do with the camera. A good way to understand the basics is to practice and study photography. It enables you to "train" your eye and taste, meaning that when it's time to make a call, you can make good choices for the lighting.

    Recommended readings: The Negative, The Camera, and The Print by Ansel Adams (note: very long and technical, but they cover the fundamentals of photography). Photographing Shadow and Light by Joey L. (behind-the-scenes and lighting position diagrams).
    Additional links: Guess the Lighting, a website that describes the lighting position diagrams of fashion and editorial photographs.
    The LightFilmSchool channel on YouTube, to learn more about light placement for film.

  2. Films:
    Benjamin gave me a list of movies that impressed him from a cinematography standpoint.
    I will now list each of these movies, what I learned, and how my taste evolved from watching them.

    Barry Lyndon (1975) directed by Stanley Kubrick.

    What I learned:
    Practical vs Natural
    From a cinematography standpoint, the interesting part of this movie is that it was entirely shot using natural light.
    You see, usually when there's an interior shot, say people gathered around a table, there is a lamp on the table.
    That lamp is called a "practical", but most of the time it's just there as a "motivator" (the reason why) that justifies the existence of a huge Fresnel lamp off camera, pointed at the actors' faces. This is done because practical lamps often don't generate enough light for the sensitivity of the camera.
    The result is that almost any interior shot made before recent groundbreaking high-ISO sensors has been "faked", with varying levels of success.
    Compare two masters of their craft: Stanley Kubrick (with Larry Smith), and Roger Deakins.


    In this shot from the movie Skyfall, you can see Deakins uses the restaurant's table lamp as a practical to justify the lighting on the actress.
    A common practice is that the lamp should not be blown out to white, as that's considered ugly. From a natural-lighting perspective this shot is unrealistic, though, as the light would have to be blown out to light the actress that much. My guess is that there is a diffuser hidden under the table; the angle and softness are slightly off. The image looks pleasing, but you know you are looking at a movie.

    In this shot from Eyes Wide Shut, by contrast, the practical is the sole light source on the actor's face. It is blown out to white because of its intensity, and you can hardly see the actor... But doesn't it feel much closer to the feeling of being inside a busy restaurant with Christmas lights?

    Bravo for practical lighting! If you want more information on Kubrick's use of practical lighting, check out this excellent video.
    (I think both options are perfectly valid, but as artists, we should know when we are breaking the laws of physics, and what we are trying to achieve by doing it).
    I will now post more of my favorite shots from Barry Lyndon:

    One of the reasons I think the choice to go all-natural for the lighting of a period film works so well is that it feels familiar: the classical painters had no tools other than the sun, the sky, windows, and candles to complete their masterworks.
    Bonus: it's kind of obvious, but ominous skies as a foreshadowing device are really effective.

    In the Mood for Love (2000), directed by Wong Kar-wai


    What I learned: Poetry within Chaos
    Christopher Doyle has to work fast. The productions are low-budget, most of them shot in real-world locations and tiny, cramped apartments. The director has no script and decides what to shoot right on the spot. Whoever thrives in this environment has to acquire a taste, an eye that can detect beauty within urban concrete jungles and neon lights.

    I really did not expect to like this film.
    In the following shots I want you to pay close attention to 1) the unexpected color choices, 2) the use of frames, windows, mirrors and pure black to create negative spaces.




    Now let's contrast this poetry by making a 180-degree turn to look at another movie with Christopher Doyle as DOP:

    Hero (2002), directed by Zhang Yimou.

    What I learned:
    Simplicity is key in composition; central compositions and symmetrical designs make the visuals stronger.
    Go bold with color: if you stick with mostly earthy/desaturated tones, reintroducing a single color at a time makes for very bold images.

    Skyfall (2012) directed by Sam Mendes. Roger Deakins as DOP.

    What I learned:
    Silhouettes, silhouettes, silhouettes.
    Leading lines.
    Selectively lighting a part of an actor's face.

    The Fall (2006) directed by Tarsem Singh. Colin Watkinson as DOP.

    What I learned:
    The Fairy Tale Aesthetic
    Transitions (as seen below, from a butterfly to an island).

What's really interesting in this movie is the juxtaposition of seemingly disparate elements to highlight the creativity of the young girl, who wants her say in how the fairy tale unfolds. Beautiful movie, I highly recommend it.

On Benjamin's movie list was also Blade Runner, but I think we don't need any more Blade Runner-inspired concept art these days :D
That's why I refuse to show it, however cool it looks!

Overall, watching these movies and paying attention to the craft of cinematography set me on a watching spree, studying many other films and absorbing on-set DVDs about lighting. I think you need to have watched a lot of good films to know what your taste in films is.
The same goes for painting, drawing, etc.

I think I also need to stop there, because I'm not sure the ArtStation blog feature was meant to handle 100 images.
I will leave you with the short film The Bloody Olive; I think the exaggerated lighting effects make it a great case study.
As a final note, I would like to thank Benjamin, and all the people who take the time to reply to emails to help others out.

What's your favorite film or short from a cinematography standpoint? Share in the comments below!