You may already know some of the things I'm going to mention here, but these are just the first thoughts that come to mind.
Quote
...I would eventually like to do cases where there may be small tolerances between the edges and yet still be considered shared.
I'm assuming you actually want to correct these small distances between vertex positions, rather than leaving them as-is while still considering them coincident. If so, you'd probably want to run a 'welding' step on the mesh first to eliminate such errors, and then move on to topological analysis under the assumption that shared positions match exactly.
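Just to illustrate what I mean by welding (this is purely hypothetical code - the Vec3 type, the function name, and the grid-snapping approach are only one way to do it, and positions within tolerance can still straddle a cell boundary, which a more robust welder would handle by also checking neighboring cells):

#include <array>
#include <cmath>
#include <cstdint>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

// Snap each position to a grid cell of size 'tolerance' and replace every
// position in a cell with the first one seen there.
void WeldPositions(std::vector<Vec3>& positions, float tolerance)
{
    std::map<std::array<std::int64_t, 3>, Vec3> representatives;
    for (Vec3& p : positions) {
        std::array<std::int64_t, 3> cell = {
            static_cast<std::int64_t>(std::llround(p.x / tolerance)),
            static_cast<std::int64_t>(std::llround(p.y / tolerance)),
            static_cast<std::int64_t>(std::llround(p.z / tolerance))
        };
        auto result = representatives.insert({ cell, p });
        p = result.first->second;  // snap to the cell's representative position
    }
}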
For detecting shared positions, one option is to use a vector->integer map, where the vectors are vertex positions and the integers are numeric IDs. Assuming a sorted map (e.g. std::map rather than std::unordered_map in C++), you can use lexicographical comparison on x, y, and z for the comparison function. You can then associate a unique integer ID with each unique position.
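For example (again purely illustrative C++ - Vec3 and PositionLess are just names I'm making up), the map might be declared along these lines:

#include <map>

struct Vec3 { float x, y, z; };

// Strict weak ordering: compare x first, then y, then z.
struct PositionLess {
    bool operator()(const Vec3& a, const Vec3& b) const {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

// Maps each unique position to its integer ID.
using PositionIdMap = std::map<Vec3, int, PositionLess>;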
I'm just winging it here, but the whole process might look something like this:
for each vertex
    if the position is not already in the map
        add it to the map with a new integer ID

for each triangle
    for each vertex
        get the ID for the position and store it as part of the vertex description
At this point you have vertex 'indices' that you can use, just as if all vertices were entirely shared to begin with.
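Put together in C++, that might look roughly like the following - the Triangle layout and function name are placeholders I'm inventing for the example:

#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

struct PositionLess {
    bool operator()(const Vec3& a, const Vec3& b) const {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

struct Triangle {
    Vec3 position[3];  // raw per-corner positions from the source mesh
    int  id[3];        // shared vertex IDs filled in below
};

// Assign a unique integer ID to each unique position, then store that ID on
// every triangle corner so edges can later be compared by index.
void AssignVertexIds(std::vector<Triangle>& triangles)
{
    std::map<Vec3, int, PositionLess> idMap;
    for (Triangle& tri : triangles) {
        for (int i = 0; i < 3; ++i) {
            // Insert the position with the next free ID if it isn't in the
            // map yet; otherwise reuse the existing ID.
            auto result = idMap.insert({ tri.position[i], static_cast<int>(idMap.size()) });
            tri.id[i] = result.first->second;
        }
    }
}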
Then:
for each triangle
    for each edge
        for each triangle
            for each edge
                check to see if the edges are shared based on vertex IDs
Obviously you can optimize that last set of loops to avoid checking triangle pairs redundantly. Also, when checking for shared edges, keep in mind that, assuming consistent winding, the indices for two coincident edges will be in opposite order with respect to one another.
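Here's roughly what those loops might look like with the redundant pairs skipped and the opposite-order check included (again, made-up types and names, and I'm leaving the 'do something with the shared edge' part up to you):

#include <cstddef>
#include <vector>

struct Triangle { int id[3]; };  // per-corner vertex IDs from the previous step

// For each unordered pair of triangles, compare every edge of one against
// every edge of the other; with consistent winding a shared edge shows up as
// (a, b) in one triangle and (b, a) in the other.
void FindSharedEdges(const std::vector<Triangle>& triangles)
{
    for (std::size_t i = 0; i < triangles.size(); ++i) {
        for (std::size_t j = i + 1; j < triangles.size(); ++j) {  // skip redundant pairs
            for (int ei = 0; ei < 3; ++ei) {
                int a0 = triangles[i].id[ei];
                int a1 = triangles[i].id[(ei + 1) % 3];
                for (int ej = 0; ej < 3; ++ej) {
                    int b0 = triangles[j].id[ej];
                    int b1 = triangles[j].id[(ej + 1) % 3];
                    if (a0 == b1 && a1 == b0) {
                        // Triangles i and j share the edge (a0, a1); record or
                        // process it however you need here.
                    }
                }
            }
        }
    }
}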
Note that some of the intermediate steps described here, like the position->ID map and storing IDs as part of the vertex descriptions, aren't strictly necessary, but I think they provide some conceptual clarity, and they should make the process more performant.
Maybe this is all too obvious and I'm not telling you anything you don't know, but you can always ask further questions if I'm off base here. Also, the above examples and pseudocode are all off the top of my head - it's been a while since I've implemented these sorts of algorithms, so I may have missed something or gotten something wrong somewhere. If so, maybe someone else will jump in with corrections.