One can process any amount of detail/data given enough time. What does the word "unlimited" actually mean here? Unlimited size, or unlimited processing speed? Considering they have reduced the geometry definition to a "half-picture", I would rather stick with 3D cube models and procedural details than go down the road of losing vertex attributes and a proper 3D space definition. Why don't they show an interactive, animated human model, instead of walking us through static stills? Has anyone here who says they're serious actually run their demo on their own machine, with at least some level of interaction, to prove real-time processing?
Euclideon Geoverse - Latest Video Calms More Critics
Open Maya and load a primitive cube. Then load a second primitive cube, but this time subdivide it until it's 10M Polygons. Euclideon stated years ago that their conversion technology converts at a rate of 64 Points Per Cubic MM.
Well then, it's not "Unlimited" Detail, it's "64 Points Per Cubic MM" Detail, isn't it? I can't start from that flythrough in their video and zoom all the way down to a guy's individual skin cell, can I? Because with "Unlimited" Detail I would be able to do just that, and even more, being, you know, *unlimited* and all.
Which is perfectly fine; nobody realistically expects any contemporary technology to pull off such feats. All people are asking is for them to drop the bullshit marketing speak that puts tech-savvy people off from the get-go.
They state there are "64 Points Per Cubic MM".
They state the world is "a few cubic kilometers".
Even with the assumption that only 0.1% of the world space is filled with voxels, that's 64'000'000'000'000'000 (64 quadrillion) voxels. Each needs at least a position (3 x 4 bytes) and a colour (3 bytes), = 960'000'000'000'000'000 (960 quadrillion) bytes.
I'm sorry, but even with a compression ratio of 17%, you cannot fit 163 petabytes of information in RAM, let alone on a hard drive.
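(A quick sanity check of that arithmetic, in Python; the 1 km³ world size, 0.1% occupancy, and 15-byte points are taken straight from the figures above.)

```python
# Sanity check of the storage estimate above:
# 1 km^3 world, 0.1% occupied, 64 points/mm^3, 15 bytes per point.
MM3_PER_KM3 = (10**6) ** 3          # 1 km = 10^6 mm, so 1 km^3 = 10^18 mm^3

points = MM3_PER_KM3 * 0.001 * 64   # 0.1% filled at 64 points/mm^3
raw = points * (3 * 4 + 3)          # xyz as 3 x 4-byte floats + 3-byte RGB
compressed = raw * 0.17             # the claimed 17% compression ratio

print(f"{points:.1e} points")                    # 6.4e+16 (64 quadrillion)
print(f"{raw / 1e15:.0f} PB raw")                # 960 PB
print(f"{compressed / 1e15:.1f} PB compressed")  # 163.2 PB
```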
> compression ratio of 17%
I only watched it with one eye as it really isn't all that interesting, but from what I understood, they claim 17% on a lossless compression. Lossless compression that compresses in the same ballpark as lossy image compressors. That claim alone would be enough to fill a thread. Where be tha Nobel Prize for these guys?
But seriously, unlimited detail is not, and has never been, something special. Fractals are unlimited detail, and we've had them for 40 or so years. Not in realtime, admittedly, but that was on computers with a billion times fewer FLOPS, too. Realtime fractals have been a reality for at least a decade now.
Unlimited detail as such isn't interesting, though. What's interesting is meaningful, dynamic, interactive unlimited detail. And they still fail to provide anything remotely close to that from what I can tell.
With that said, the patent office turned down my perpetual motion machine application this week, too. I wouldn't know why...
> But seriously, unlimited detail is not, and has never been, something special. Fractals are unlimited detail, and we've had them for 40 or so years. Not in realtime, admittedly, but that was on computers with a billion times fewer FLOPS, too. Realtime fractals have been a reality for at least a decade now.
You're missing the point a little, I feel. Fractals are "unlimited" because, quite contrary to what Euclideon is claiming, they can be procedurally generated and don't have high memory requirements.
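(To make that concrete with a toy example: the few lines below are the entire "asset" for a shape with structure at every zoom level. The detail budget is compute time, not storage. The function name is illustrative; the escape-time algorithm itself is the standard one.)

```python
# "Unlimited detail" from a procedure rather than stored data: the
# Mandelbrot set has structure at every zoom level, yet the whole
# "model" is this function -- no point database required.
def mandelbrot_escape(cx: float, cy: float, max_iter: int = 256) -> int:
    """Return the iteration at which the orbit escapes (the 'detail' value)."""
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

# Zoom in a billion-fold near the boundary: still structured, still ~0 bytes stored.
for zoom in (1.0, 1e-3, 1e-6, 1e-9):
    print(zoom, mandelbrot_escape(-0.743643887037 + zoom, 0.131825904205))
```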
In their new Geoverse demos, I'm sure the tech is still capable of using 64 points/mm^3 data, but an aerial laser scan does not provide that resolution (not even military SAR scans do)... so those demos obviously are not storing that kind of hi-res data.
Their compressor also relies on the same trick of quantizing and palettizing the data, then extracting and instancing repeated patterns.
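(For readers unfamiliar with those terms, here's a toy sketch of what quantize-palettize-instance could look like. This is illustrative only; nothing here is Euclideon's actual format or code.)

```python
# Toy sketch of the "palettize + instance repeats" idea described above.
from collections import OrderedDict

def palettize(colors):
    """Replace each RGB tuple with a small index into a shared palette."""
    palette = OrderedDict()
    indices = []
    for c in colors:
        if c not in palette:
            palette[c] = len(palette)
        indices.append(palette[c])
    return list(palette), indices

def instance_repeats(blocks):
    """Store each unique block once; repeats become references to the first copy."""
    seen = {}
    refs = [seen.setdefault(tuple(b), len(seen)) for b in blocks]
    return list(seen), refs

palette, idx = palettize([(200, 10, 10), (200, 10, 10), (0, 0, 255), (200, 10, 10)])
print(palette, idx)                      # 2-entry palette, indices [0, 0, 1, 0]

unique, refs = instance_repeats([[1, 2], [3, 4], [1, 2]])
print(unique, refs)                      # [(1, 2), (3, 4)] [0, 1, 0]
```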
Where do they make the claim that their converter is lossless??
> They state there are "64 Points Per Cubic MM".
> They state the world is "a few cubic kilometers".
> Even with the assumption that only 0.1% of the world space is filled with voxels, that's 64'000'000'000'000'000 (64 quadrillion) voxels. Each needs at least a position (3 x 4 bytes) and a colour (3 bytes), = 960'000'000'000'000'000 (960 quadrillion) bytes.
> I'm sorry, but even with a compression ratio of 17%, you cannot fit 163 petabytes of information in RAM, let alone on a hard drive.
1. It isn't Voxels. It's Point Cloud.
2. The world they showed was "1 square kilometer". A cubic kilometer would mean the world is 1 kilometer deep. What is the point of converting Point Cloud 3,000 feet below the surface, which nobody would ever see, while completely wasting storage space at the same time?
3. Converting based on 64 Points per mm^3 does not mean the output asset will actually have 64 Points per mm^3. Polygon models are surface-based and hollow inside. Point Cloud is also surface-based data. You wouldn't convert the empty space inside of an object. Euclideon essentially just needs to break the object down into a divisible structure.
In other words, if you converted a 1 cubic meter polygon box as a solid, that would "technically" equate to 64B Points. Thing is, based on my math, converting only the surface data of that box equates to just 96M Points, which is roughly 1/667th of what you're saying would be converted (see the quick check after this list).
4. AeroMetrex is a geospatial company and, from what I understand, they are the first company to license Euclideon's technology.
In the video I linked to, they comment on the size of the data-set. They state specifically that the Polygon version of that entire data-set is 15GB, while the converted Point Cloud data-set is 3GB: 20% of the original file size. And that was done almost a year ago; compaction has increased and continues to increase, according to multiple companies that have been licensing the technology.
Furthermore, that entire data-set is roughly a square kilometer. And not only is that a square kilometer, there is NO REPEATED GEOMETRY. That is an entirely unique Point Cloud Data-Set. No trees are duplicated, no cars are reused and every inch of asphalt is converted Point Cloud.
They also stated the entire data-set is made up of roughly 100 Million Polygons and, when converted, roughly 40 Billion Points. In this instance, the entire data-set is one large piece of geometry. And from what has been revealed, that actually helps Euclideon compact more and reduce file size for massive data-sets.
So if a company can get a square kilometer of unique Point Cloud into 3GB, imagine if you built a forest with repeatable cubes of dirt. Or reusable trees. Grass and leaves being reused. File sizes would drop dramatically and never reach unsustainable storage levels.
5. Euclideon's technology does not load Data-Sets into RAM. When you convert from Polygons to Point Cloud, it's converted into Euclideon's .UDS file type. It's a zip file that indexes the data in a very specific way. Euclideon's algorithm then searches the hard-drive (the zip files), finds as many points as are needed for the resolution, and temporarily sends them to RAM to be computed. Some have hypothesized that it's essentially a cache for the data being rendered. Using a zip file would allow the algorithm to view the content without having to extract the entire data-set (see the sketch after this list).
Companies have confirmed it doesn't load the entire data-set into RAM.
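(For point 3, a quick check of the box arithmetic, assuming "64 points per cubic mm" means a 4x4x4 lattice per millimetre, which the figures imply.)

```python
# Checking point 3 above: a 1 m polygon cube at 64 points per cubic mm.
# A 4x4x4 lattice per mm^3 implies 16 points per mm^2 of surface.
SIDE_MM = 1000                               # 1 m = 1000 mm

solid = 64 * SIDE_MM ** 3                    # filling the whole volume
surface = 16 * 6 * SIDE_MM ** 2              # sampling the 6 faces only

print(f"solid:   {solid:,}")                 # 64,000,000,000 (64B Points)
print(f"surface: {surface:,}")               # 96,000,000 (96M Points)
print(f"ratio:   1/{round(solid / surface)}")  # 1/667, matching the post
```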
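(And for point 5, a hypothetical sketch of that out-of-core pattern: keep a spatial index over the file on disk and pull only the chunks a frame needs into a small RAM cache. The .uds layout, index shape, and function names here are guesses for illustration, not Euclideon's actual internals.)

```python
# Hypothetical out-of-core streaming: disk-resident points, small RAM cache.
from functools import lru_cache

@lru_cache(maxsize=4096)                # the "temporarily sent to RAM" cache
def load_node(file_path: str, offset: int, size: int) -> bytes:
    """Read one indexed chunk of points straight from disk."""
    with open(file_path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def render_frame(index, visible_cells, path="world.uds"):
    """Fetch only the chunks whose cells are visible at this resolution."""
    for cell in visible_cells:
        offset, size = index[cell]      # index maps cell -> (offset, size) on disk
        points = load_node(path, offset, size)
        # ... splat `points` to the framebuffer here ...
```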
> Companies have confirmed it doesn't load the entire data-set into RAM.
Obviously :D
A bit off-topic: how do these people even get such high-quality 3D scans of the world? Even the inside of that half-built building was visible.
@ Outthink The Room - It was premature on my part to post without first researching the subject a little more. I've changed my opinion on the plausibility of Euclideon's software, and think it is very possible to do what they claim, given the resources they're working with.
> A bit off-topic: how do these people even get such high-quality 3D scans of the world? Even the inside of that half-built building was visible.
http://en.wikipedia.org/wiki/Lidar
They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.
> They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.
GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now with extra calculations (raw GPS accuracy is still measured in meters). Then get the relative position of the points by measuring the time it takes for the laser pulses to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light.").
Or have your own local "GPS" by triangulating the position of the scanner against two or more stationary positions (like a parked truck with computer equipment in it, or local cell phone towers). Just hypothesizing.
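(The ranging half of that really is just timing, as sketched below; the "magic" alignment half, merging scans taken from different positions, is typically GPS/IMU data plus point-cloud registration algorithms such as ICP. The function name is illustrative.)

```python
# Lidar ranging: distance = speed of light * round-trip time / 2 (out and back).
C = 299_792_458.0                     # speed of light, m/s

def lidar_distance_m(round_trip_s: float) -> float:
    """Range to the surface from a laser pulse's round-trip time."""
    return C * round_trip_s / 2

# A pulse returning after ~66.7 nanoseconds means the surface is ~10 m away.
print(lidar_distance_m(66.7e-9))      # ~10.0
```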