Euclideon Geoverse - Latest Video Calms More Critics

Started by October 14, 2013 05:15 AM
29 comments, last by laztrezort 11 years ago

There are algorithms that can convert even regular camera shots to 3D data (some of them date back to the '60s or '70s). The key is that you don't even have to know the "depth maps" of the images.

If I recall correctly, it's possible to reconstruct the 3D structure even without knowing the exact positions of the cameras. The only constraint is that the scene has to be lit exactly the same in every picture (so you have to use multiple cameras shooting at the same time), so that the algorithm can detect (with some magic, or maybe with human guidance?) the position of the same point in all the images (where it isn't obscured, obviously).

This is no magic: human depth perception works similarly, with only image data (no depth) from two viewpoints.
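
For illustration, here's a minimal sketch of the core operation: recovering one 3D point from its pixel coordinates in two views. It assumes the two 3x4 camera projection matrices are already known (in a real pipeline they come from calibration or structure-from-motion), and all the numbers below are made up.

```python
import numpy as np
import cv2

# Hypothetical cameras: identical intrinsics, second camera shifted 1 m to the right.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 1 m to the right

# The same scene point observed in both images (pixel coordinates, 2xN arrays).
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[120.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print("reconstructed point:", X)                  # roughly (0, 0, 4) with these numbers
```

With thousands of matched points instead of one, the same triangulation step yields a point cloud.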

They usually scan from various directions and combine the data to eliminate any blind spots. How they are able to determine where the points are in 3D space after moving around with the scanner is beyond me, so I'm just going to assume it's magic.

GPS on the scanner, maybe? We can get it down to within a few centimeters of accuracy now (with extra correction data; raw GPS accuracy is still measured in meters). Then get the relative position of the points by measuring the time it takes for the laser pulses to bounce back (or, as Wikipedia puts it: "measures distance by illuminating a target with a laser and analyzing the reflected light.").
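
The distance part really is just the round-trip time of the pulse. A back-of-the-envelope sketch:

```python
# Time-of-flight range: the pulse travels to the target and back,
# so distance = speed_of_light * time / 2.
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to the target from the measured round-trip time of one pulse."""
    return C * round_trip_seconds / 2.0

# A return detected 667 nanoseconds after firing is roughly 100 m away.
print(lidar_range(667e-9))  # ~100.0 m
```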

Or, have your own local "GPS", by triangulating the position of the scanner against two or more stationary positions (like a parked truck with computer equipment in it, or local cell phone towers). Just hypothesizing.
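
As a rough sketch of that idea (all beacon positions and measured ranges below are made up), the scanner's 2D position can be solved from distances to three known, stationary points by linearizing the range equations:

```python
import numpy as np

beacons = np.array([[0.0, 0.0],
                    [100.0, 0.0],
                    [0.0, 80.0]])            # known beacon coordinates (metres)
ranges = np.array([50.0, 80.6226, 50.0])     # measured distances to each beacon

# Subtracting the first range equation from the others gives a linear system:
#   2 * (b_i - b_0) . p = (r_0^2 - r_i^2) + (|b_i|^2 - |b_0|^2)
A = 2.0 * (beacons[1:] - beacons[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)   # ~ (30, 40) for these numbers
```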

Eh, GPS is like 15 meters, no matter what (except if you're military, then it's slightly better).

Though EGNOS (more or less identical to WAAS) claims 7 meters, I've never seen anything better than 10 meters, and although differential GPS claims 10 cm, I've never seen anything better than 2 meters (owning devices capable of both).

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).

If you've got multiple 2D images with shared features, you can stitch them into a panorama. Same with overlapping 3D scans -- no need for GPS, just line up the shared details (either automatically, or by hand).
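
A minimal sketch of the "line up the shared details" step, assuming a handful of corresponding points have already been identified in both scans (real pipelines find those correspondences automatically, e.g. via feature matching or ICP):

```python
import numpy as np

def align_rigid(src: np.ndarray, dst: np.ndarray):
    """Return R, t such that R @ src_i + t ~= dst_i (both arrays are Nx3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Made-up example: scan B is scan A rotated 30 degrees about Z and shifted.
a = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [1, 1, 3]], dtype=float)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
b = (R_true @ a.T).T + np.array([5.0, -2.0, 1.0])

R, t = align_rigid(a, b)
print(np.allclose((R @ a.T).T + t, b))   # True: the scans line up
```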

And yes, collection of photos / video walk-through -> point cloud is a solved problem.
Once we were sent footage of an industrial area and told to recreate it as in-game cutscenes - we started by converting the footage to a point cloud of that area before modeling the buildings... And that was with cheap software and unannotated footage.
If you know the motion of the camera and have a bigger budget, you can do much better. See http://en.m.wikipedia.org/wiki/Structure_from_motion
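
Here's a hedged sketch of the very first step of such a pipeline using OpenCV: match features between two frames of a walkthrough, estimate the essential matrix, and recover the relative camera motion. The filenames and the intrinsic matrix K below are placeholders; chaining this over many frames and triangulating the matches is what dedicated structure-from-motion packages automate.

```python
import numpy as np
import cv2

img1 = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder frames
img2 = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])                          # assumed intrinsics

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC and recover the relative pose.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R)
print("translation direction (scale is unknown from images alone):", t.ravel())
```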

I'm inclined to believe this data is just a smoke-and-mirrors show. They probably got a reasonably high-res satellite scan from somewhere, covering just a large enough area that the demo looks stunning, and then spent 6 months and a team of 20 artists filling in the detail (in the selected areas where the camera zooms in close).

But their product is about data visualization, not data acquisition. They're not even showing off their own data; they're using clients' data (which, yes, could be acquired in many ways). To supplement an aerial scan, it would be much cheaper for that client to drive a van around the city for a few days than to hire artists...

They're selling it to people who have already acquired data, or people in the acquisition business. There are a lot of companies in the GIS business who will charge you big bucks to deliver exactly this kind of data. They're not the ones on trial here ;-P
The company showing off the detailed 3D construction site is unrelated to Euclideon. They already offer this service, and that visualization is using triangulated meshes, not Unlimited Detail. There's no point in pretending they can do change detection by faking it with artists if they can't really capture that data -- it's just a lawsuit waiting to happen when they can't deliver on contractual obligations. One thing's for sure though - these kinds of services aren't cheap!

LIDAR flyovers can do sub-metre resolution, but military imaging radar can produce point clouds with a resolution of about 2 mm, which is enough to be able to analyze specific tyre tracks in dirt, or do change detection on said dirt to tell which tank moved where by following the tracks (I worked on a simulator for them once). And they put them on high-altitude flights, satellites and the space shuttle for fun...
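
For anyone curious what "change detection" boils down to, here's a rough, purely illustrative sketch: rasterize two registered scans of the same patch of ground into height grids and flag the cells that moved by more than a threshold. The grid size, threshold and synthetic data are all made up; real tooling also handles registration, noise and gaps.

```python
import numpy as np

def height_grid(points: np.ndarray, cell: float, shape: tuple) -> np.ndarray:
    """points is Nx3 (x, y, z); returns the highest z seen in each grid cell."""
    grid = np.full(shape, np.nan)
    ix = (points[:, 0] / cell).astype(int).clip(0, shape[0] - 1)
    iy = (points[:, 1] / cell).astype(int).clip(0, shape[1] - 1)
    for i, j, z in zip(ix, iy, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid

def changed_cells(before: np.ndarray, after: np.ndarray, threshold: float):
    """Boolean mask of cells whose height moved by more than `threshold`."""
    diff = np.abs(after - before)
    return np.nan_to_num(diff, nan=0.0) > threshold

# Synthetic scans of a 5 m x 5 m patch; pretend a 5 cm deep rut appeared in one strip.
rng = np.random.default_rng(0)
scan_a = rng.uniform(0, 5, size=(10000, 3)) * [1, 1, 0.01]
scan_b = scan_a.copy()
scan_b[scan_a[:, 0] < 1.0, 2] -= 0.05
mask = changed_cells(height_grid(scan_a, 0.05, (100, 100)),
                     height_grid(scan_b, 0.05, (100, 100)), 0.02)
print(mask.sum(), "cells changed")
```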

Hodgman, AeroMetrex doesn't only use meshes for that tool they built. Also, it isn't LiDAR; they use photogrammetry techniques.

AeroMetrex built that mesh tool to correct any irregularities in the data-set, making sure that when it's converted to Euclideon's format, it's an entirely clean mesh. That's why AeroPro 3D gives them a ton of flexibility compared to most solutions. For those who are curious how data-sets like this are made, here are a couple of videos.

The first video shows a company scanning the Himalayas and creating one large dense point cloud using thousands of pictures.

The second video shows off a product called PhotoScan by the company Agisoft. It's a photography-to-3D data-set converter. I used this company as an example because their product ranges from landscape/surveying all the way down to smaller, individual assets, and because of how it segues into the third example.

The third video is the Fox Engine demo from GDC. They are actually using Agisoft's software to create their scanned assets. If you jump to the 27:30 mark in the video, that's when they start discussing PhotoScan. They give a few examples and offer a small glimpse into how they're approaching this.

Kojima Studios obviously has to retopologize the 3D model once they convert it to polygons, but it gives a clear indication of what's possible if there isn't a limit on polygon count or texture resolution. They also show how quick re-texturing an asset from individual pictures would be.

These new videos look pretty good, and they seem to have actually found an application for their technology. I say good for them.

But with the way they tried to pitch their product in the past with ridiculous claims (Unlimited detail! Polygons are dead!), they have lost all credibility for me.

To me it just looks like a company trying to find gullible investors to make cash as quickly as possible.

Off-topic, but for anyone interested:

There are "tricks" where you can get reliable and repeatable GPS accuracy to a tenth of a cm, basically in real-time, in 3D. This usually involves receiving a correction signal through cellular or radio. Keep in mind, though, that these are not the same as the GPS receivers in phones - they are far more expensive and bulky.

As for scanners and their software, while I've never had a chance to use one (yet), I have seen some demonstrations and sales pitches. One of the selling points of scanning software seems to be its ability to automatically stitch data together (and weed out a huge amount of redundant data). A vast amount of the data is actually useless - e.g., LIDAR fly-overs shoot a massive number of points over foliage, and the software removes everything that hits the canopy, keeping only the shots that happen to make it through to the ground.
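
A toy sketch of that "keep only the shots that reach the ground" idea (the bin size is arbitrary, and a real ground classifier is much smarter about slopes, buildings and noise): bin the returns into a coarse XY grid and keep only the lowest return per bin.

```python
import numpy as np

def lowest_returns(points: np.ndarray, cell: float = 1.0) -> np.ndarray:
    """points is Nx3 (x, y, z); keep only the lowest return in each XY cell."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    ground = {}
    for key, p in zip(map(tuple, keys), points):
        if key not in ground or p[2] < ground[key][2]:
            ground[key] = p
    return np.array(list(ground.values()))

# Toy usage: three returns over the same square metre; only the lowest survives.
pts = np.array([[0.2, 0.3, 12.0],    # canopy hit
                [0.6, 0.7, 8.5],     # branch hit
                [0.4, 0.9, 0.3]])    # ground hit
print(lowest_returns(pts))           # keeps just [0.4, 0.9, 0.3]
```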

Also, in the demonstration I saw, they took color photos of each scene and projected those images onto the produced mesh to create a fairly convincing colorized model of the scene.
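
That projection step is conceptually simple. Here's a toy sketch (synthetic image, made-up camera) that projects 3D points into a photo with a pinhole projection matrix and samples the colour at each projected pixel; real tools work per triangle and handle visibility and occlusion.

```python
import numpy as np

def colorize(points: np.ndarray, image: np.ndarray, P: np.ndarray) -> np.ndarray:
    """points: Nx3, image: HxWx3, P: 3x4 projection matrix -> one colour per point."""
    homog = np.hstack([points, np.ones((len(points), 1))])   # Nx4 homogeneous
    proj = (P @ homog.T).T                                   # Nx3
    uv = np.rint(proj[:, :2] / proj[:, 2:3]).astype(int)     # pixel coordinates
    h, w = image.shape[:2]
    uv[:, 0] = uv[:, 0].clip(0, w - 1)
    uv[:, 1] = uv[:, 1].clip(0, h - 1)
    return image[uv[:, 1], uv[:, 0]]

# Synthetic check: camera at the origin looking down +Z, photo with a red left half.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
photo = np.zeros((480, 640, 3), dtype=np.uint8)
photo[:, :320] = (0, 0, 255)                       # left half red (BGR)
pts = np.array([[-1.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
print(colorize(pts, photo, P))                     # first point picks up red
```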
