I was scanning some headlines on space-forward AI development today and found this interesting:
NVIDIA revealed a “Vera Rubin Space-1” module designed to run AI directly on satellites, processing data in orbit instead of sending it back to Earth. That introduces new constraints. In space, cooling doesn’t work the same way, so even running a GPU becomes a design challenge. While others like Google and SpaceX are exploring data centers in orbit, NVIDIA is pushing compute directly to the edge, wherever the data already exists. (ref)
This was also fascinating, from Google:
If AI is a foundational general-purpose technology, we should anticipate that demand for AI compute — and energy — will continue to grow. The Sun is by far the largest energy source in our solar system, and thus it warrants consideration how future AI infrastructure could most efficiently tap into that power. (ref)
Philosophically, all of this makes me wonder whether some of the challenges this industry is trying to solve right now – power, cooling, throughput – are the wrong problems entirely!
If the alternative is floating asteroid-sized datacenters in space, perhaps our current ambitions are reaching too far ahead of Moore’s Law. Maybe we need to find non-electrical methods for storing and processing information instead.
It also makes me wonder whether I really need to worry about any of this.
The business ambition is driving the science faster and faster, which is fascinating, but I want to focus even more on the human side that all of this ultimately needs to serve.
“Bring it back down to Earth,” as my father would say.
