this reminds me of Polyworld by Larry Yaeger, an artificial life sim where each creature has a vision system. i played around with this back in the early 2000s, though the hardware i had access to was basically insufficient to run it in any real way. it's nice to see its development has continued.
Haven't come across Polyworld before — just looked it up and it's super cool, especially for 1994. The vision system is an interesting design choice. Werld takes a different approach — graph topology instead of a 2D plane, and NEAT brains instead of Hebbian learning — but the core philosophy is the same.
And yeah hardware has caught up a bit since the early 2000s, though my hard drive is having a hard time. Thanks for the reference, going to dig into Yaeger's papers.
wonder if the Black Mirror episode was based on Polyworld then?
i made some art on this site years ago. some people used this to make plottable art. plotting it is definitely a slower way to watch it work through a drawing :)
The main benefit I see is being able to more accurately represent different light sources. This applies to transmission but also reflectance.
sRGB and P3, what most displays show, by definition use the D65 illuminant, which approximates "midday sunlight in Northern Europe." So when you render something indoors, you either change the RGB of the materials, change the emissive RGB of the light source, or tonemap the result, all of which can only approximate other light sources to some extent. Spectral rendering lets you approximate those other light sources more accurately.
Whether the benefit is light sources or transparency or reflectance depends on your goals and on what spectral data you use. The article’s right that anything with spiky spectral power distributions is where spectral rendering can help.
> sRGB and P3, what most displays show, by definition use the D65 illuminant
I feel like that’s a potentially confusing statement in this context since it has no bearing on what kind of lights you use when rendering, nor on how well spectral rendering vs 3-channel rendering represents colors. D65 whitepoint is used for normalization/calibration of those color spaces, and doesn’t say anything about your scene light sources nor affect their spectra.
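To make the normalization point concrete: the sRGB spec defines its XYZ-to-linear-RGB matrix so that the XYZ coordinates of the D65 white point map to linear RGB (1, 1, 1). A quick sketch, using matrix and white-point values rounded from the published sRGB definition:

```python
# XYZ -> linear sRGB matrix (rounded values from the sRGB spec)
M = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

# D65 white point in XYZ, normalized so that Y = 1
d65 = (0.95047, 1.00000, 1.08883)

rgb = [sum(m * c for m, c in zip(row, d65)) for row in M]
print(rgb)  # approximately [1.0, 1.0, 1.0]
```

In other words, D65 pins down where (1, 1, 1) lands on the display; it says nothing about what spectra your scene lights emit.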
I’ve written a spectral path tracer and find it somewhat hard to justify the extra complexity and cost most of the time, but there are definitely cases where it matters and it’s useful. There’s also probably more measured spectral data available now than when I was playing with it. I’m sure you’re aware and this is what you meant, but it might be worth mentioning that it’s the interaction of multiple spectra that matters when doing spectral rendering. For example, it doesn’t do anything for the rendered color of a light source itself (when viewed directly); it only matters when the light is reflected by or transmitted through materials whose spectra differ from the light source’s. That’s where wavelength sampling will give you a different result than a 3-channel approximation.
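A toy sketch of that last point, using made-up narrow-band spectra and crude Gaussian stand-ins for the sensor curves (none of this is real CIE data): multiplying the full spectra per wavelength and then projecting to three channels gives a different answer than projecting each spectrum first and multiplying channel-wise.

```python
import math

wavelengths = list(range(400, 701, 10))  # nm, coarse sampling

def gaussian(center, width):
    return [math.exp(-((w - center) / width) ** 2) for w in wavelengths]

# Hypothetical spiky emitter and a reflectance peaking slightly off it
light = gaussian(550, 15)
surface = gaussian(560, 15)

# Crude R/G/B sensor curves (stand-ins for real color matching functions)
bands = [gaussian(600, 40), gaussian(550, 40), gaussian(450, 40)]

def project(spd):
    """Project a spectrum onto the three sensor curves."""
    return [sum(s * b for s, b in zip(spd, band)) for band in bands]

# Spectral rendering: multiply spectra per wavelength, then project once
spectral_rgb = project([l * s for l, s in zip(light, surface)])

# 3-channel rendering: project each spectrum first, multiply channel-wise
light_rgb, surface_rgb = project(light), project(surface)
channel_rgb = [l * s for l, s in zip(light_rgb, surface_rgb)]

print("spectral :", spectral_rgb)
print("3-channel:", channel_rgb)
```

With spiky spectra like these, the red/green balance of the two results diverges noticeably; for broad, smooth spectra the two approaches land much closer together, which is why the extra cost is often hard to justify.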
i would expect the denser part to be the smaller gamut that can be made with paint, since we've been naming those colors for a lot longer than the larger gamut that can be made with a screen. The paint/print gamut looks kind of like the denser parts of these scatter plots within the larger sRGB cube (though the paint gamut isn't entirely contained within sRGB).
https://en.wikipedia.org/wiki/Polyworld