Hi Andrew, to answer your question: I was originally using three.js for the engine, but at this level of data I had to build models on the fly for the sections of map in visible range (for the sake of load time, bandwidth, and JavaScript speed), and it turned out to handle dynamic content like that very poorly, despite my trying a few different approaches.
I threw it out and now use raw WebGL, which gives me much greater control over the rendering pipeline and, most importantly, memory; the true cause of my issues turned out to be browsers taking a long time to reallocate large array buffers.
I now build vertex data into one large fixed array buffer and just keep reusing it, which eliminates those reallocations.
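A minimal sketch of that reuse pattern, assuming a flat 8-float vertex layout (position, normal, UV) and a made-up size cap; the real layout and limits aren't shown here:

```javascript
// One big preallocated buffer; never reallocated, just refilled each tick.
const FLOATS_PER_VERT = 8;        // x,y,z, nx,ny,nz, u,v (assumed layout)
const MAX_VERTS = 1 << 20;        // capacity is an illustrative guess
const vertexData = new Float32Array(MAX_VERTS * FLOATS_PER_VERT);

let writeCursor = 0;              // reset instead of allocating a new array

function beginFrame() {
  writeCursor = 0;
}

function pushVertex(x, y, z, nx, ny, nz, u, v) {
  const o = writeCursor;
  vertexData[o] = x;      vertexData[o + 1] = y;  vertexData[o + 2] = z;
  vertexData[o + 3] = nx; vertexData[o + 4] = ny; vertexData[o + 5] = nz;
  vertexData[o + 6] = u;  vertexData[o + 7] = v;
  writeCursor += FLOATS_PER_VERT;
}

// After building, only the used slice would be uploaded, e.g. with
// gl.bufferSubData(gl.ARRAY_BUFFER, 0, vertexData.subarray(0, writeCursor));
```

The point is that the `Float32Array` is allocated once; every tick just rewinds the cursor and overwrites, so the browser never has to find and zero a fresh large allocation.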
To manage the enormous amount of data required for maps of that size, even in Chrome, I use a multi-tiered system.
What actually gets generated is a custom compressed data format that stores the map data in 8x8x8 chunks.
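With a power-of-two chunk size like 8, chunk addressing reduces to shifts and masks. A sketch of how that indexing might look (the bit layout is my assumption; the actual compressed format isn't shown):

```javascript
const CHUNK_BITS = 3;             // 2^3 = 8 voxels per axis
const CHUNK_SIZE = 1 << CHUNK_BITS;

// Which 8x8x8 chunk a world-space voxel coordinate falls in.
function chunkCoord(x, y, z) {
  return [x >> CHUNK_BITS, y >> CHUNK_BITS, z >> CHUNK_BITS];
}

// Index of a voxel inside its chunk (0..511).
function voxelIndex(x, y, z) {
  const lx = x & (CHUNK_SIZE - 1);
  const ly = y & (CHUNK_SIZE - 1);
  const lz = z & (CHUNK_SIZE - 1);
  return (lz << (2 * CHUNK_BITS)) | (ly << CHUNK_BITS) | lx;
}

// String key for looking a chunk up in a Map-based cache.
function chunkKey(cx, cy, cz) {
  return cx + "," + cy + "," + cz;
}
```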
As you move through the map, the data for chunks in your region is decompressed and held at full size in memory.
Then, as you move the camera, the chunks in your visible range get models (vertex data) built for them and stored for display every tick.
So it's a three-tiered system.
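The three tiers could be sketched as a pair of caches layered over the compressed store; `decompress` and `buildMesh` here are hypothetical stand-ins for the real routines, and the eviction policy is my assumption:

```javascript
const compressedChunks = new Map(); // tier 1: whole map, compressed on disk/memory
const liveChunks = new Map();       // tier 2: chunks near the player, decompressed
const meshes = new Map();           // tier 3: visible chunks, built vertex data

function updateTiers(nearbyKeys, visibleKeys, decompress, buildMesh) {
  // Tier 2: decompress chunks entering the player's region, drop the rest.
  for (const key of nearbyKeys) {
    if (!liveChunks.has(key) && compressedChunks.has(key)) {
      liveChunks.set(key, decompress(compressedChunks.get(key)));
    }
  }
  for (const key of [...liveChunks.keys()]) {
    if (!nearbyKeys.has(key)) liveChunks.delete(key);
  }
  // Tier 3: build vertex data only for chunks in visible range.
  for (const key of visibleKeys) {
    if (!meshes.has(key) && liveChunks.has(key)) {
      meshes.set(key, buildMesh(liveChunks.get(key)));
    }
  }
  for (const key of [...meshes.keys()]) {
    if (!visibleKeys.has(key)) meshes.delete(key);
  }
}
```

Each tier only pays for what the tier above it actually needs: the full map stays compressed, only nearby chunks sit decompressed, and only visible chunks carry mesh data.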