That's a good point. I'm sure I've made that mistake myself, and I know I've seen it made numerous times in articles and forum threads alike. It seems like a word that might mean what they meant, but it's not. I guess something like cache locality is more apt, or perhaps temporal locality more generally.
I think I've heard people use the term "coherent accesses"; maybe it stems from that?
That is a related topic, yes.
Understanding exactly how memory works in modern computers is a fairly big topic.
Decades ago it was easy enough: You had main memory and you had a processor. There was no cache. If you needed something it was fetched from memory.
Then they added a cache. Early caches were a few bytes. Then 16 bytes, then 32 bytes, then bigger and bigger.
Then there were more levels of caches. You could buy dedicated external cache that logically sat between your real main memory and your CPU.
Then it became popular to add a second CPU, and more cache chips with it. Caches existed both integrated into the CPU and as external chips.
Then each CPU gained additional levels of cache that needed to be kept coherent.
These days you have multiple levels of cache feeding into potentially multiple physical processors that all have their own caches, feeding into potentially multiple virtual processors that also potentially have caches. Then inside the processor all the instructions are broken down and reordered anyway, and the CPU will predict where your memory accesses will be so it can prefetch them before the instructions are fully decoded. Modern hardware does lots of amazing things.
Any time you modify something somewhere, every other cache that holds a copy of that value needs to be updated or invalidated so it doesn't serve stale data. If you update processor 3's data cache for a memory address, that change eventually needs to propagate to any other processors and caches that also hold that address.
Good data-oriented design means knowing where all the copies of the data live, minimizing those copies (ideally a single chain straight from main memory, not copies scattered across every processor's cache), and making sure data is already in the right cache when you need it. On top of a solid software development background and a grasp of data structures and algorithms, that requires a good understanding of the physical hardware, and hardware configurations seem to change a little with every new generation.