Big data is relative. How much data, and what types of data, are required before it becomes "big"?
Wikipedia's definition is essentially "too big for a normal RDBMS to handle efficiently".
I can imagine some types of data that reach those database limits at a few gigabytes or even less. If you need to process statistical data, streams of data, or other data that a "select ... from ... where ..." style query doesn't naturally fit, you are going to reach "big data" quickly.
Video processing, for example, is a horrible fit for a relational database. You might store a few index values in a database along with meta-information about the raw data stream, but the raw data itself is best processed with specialized tools. The same goes for statistical processing, where you must actually load and evaluate a large amount of data. Both would very quickly meet Wikipedia's definition.
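As a rough sketch of what that split can look like (the table name, columns, and file paths here are purely illustrative), the database only holds small index and metadata rows, while the heavy raw stream stays outside and is handed to specialized tools by reference:

```python
import sqlite3

# The RDBMS only holds index/meta rows; the raw stream lives
# outside the database (filesystem, object store, etc.).
conn = sqlite3.connect("video_index.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS clips (
        clip_id     INTEGER PRIMARY KEY,
        camera      TEXT,
        started_at  TEXT,   -- ISO timestamp
        duration_s  REAL,
        raw_path    TEXT    -- pointer to the raw stream on disk
    )
""")
conn.execute(
    "INSERT INTO clips (camera, started_at, duration_s, raw_path) VALUES (?, ?, ?, ?)",
    ("cam-07", "2013-04-01T12:00:00Z", 3600.0, "/data/raw/cam-07/2013-04-01T12.h264"),
)
conn.commit()

# Queries stay small and relational; the raw bytes are processed
# elsewhere (ffmpeg, OpenCV, a stream processor) using the path.
for clip_id, path in conn.execute(
    "SELECT clip_id, raw_path FROM clips WHERE camera = ? ORDER BY started_at",
    ("cam-07",),
):
    print(clip_id, path)
```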
I can imagine datasets reaching into the terabytes, and potentially even into the exabytes, that are still a great fit for a relational database. As long as the system remains one of direct lookup and retrieval, the various off-the-shelf systems remain viable.
For example, you could build a Google Maps style database, but push the resolution up to something akin to a spy satellite's. Not just today's commodity spy satellites; imagine some of the most powerful deep-space telescopes pointed toward the Earth instead of away from it. After all the slicing and dicing of data, the system just becomes a direct lookup of individual blocks of data. Even though the data set is extremely large, it can be organized so that simple lookup remains efficient.
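To make the "direct lookup of blocks" idea concrete, here is a small sketch assuming the standard Web Mercator "slippy map" tiling scheme (that scheme and the tile_key helper are my own illustration, not something from the question): every image block gets a (zoom, x, y) key, and serving a request is a single key lookup regardless of how large the total data set grows.

```python
import math

def tile_key(lat_deg, lon_deg, zoom):
    """Map a coordinate to the (zoom, x, y) tile that contains it,
    using the standard Web Mercator slippy-map tiling scheme."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return zoom, x, y

# Store each block under its key in a key-value store, a partitioned
# table, or plain files; retrieval is then one direct lookup.
print(tile_key(40.7128, -74.0060, 18))  # the block covering lower Manhattan at zoom 18
```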