The complexity of that algorithm is obvious at a glance; it needs neither a test nor a port to another language. The nested loop is all you need to see.
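For illustration, here's the kind of thing meant (a made-up example; `count_pairs` is my name, not from the original code):

```cpp
#include <cstddef>
#include <vector>

// Count pairs (i, j) with i < j and v[i] + v[j] == target.
// Two nested loops over n elements: O(n^2) at a glance.
// No benchmark or rewrite in another language required to know that.
std::size_t count_pairs(const std::vector<int>& v, int target) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] + v[j] == target)
                ++count;
    return count;
}
```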
The details of what algorithmic complexity is and how it is calculated are most likely covered in one of the 300- or 400-level courses. Best, worst, and amortized / average complexity are all typically important. The course should also cover how algorithmic complexity and runtime are only loosely related and definitely not interchangeable. These measures are also asymptotic: they describe how an algorithm behaves on extremely large collections or long sequences, not how fast it runs at the sizes you actually have.
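The amortized case is the one people usually find least intuitive, so here's a minimal sketch of the textbook example (growing a dynamic array; the reallocation-counting trick is mine, and the exact growth factor is implementation specific):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// push_back is O(n) in the worst case (a reallocation copies every
// element) but amortized O(1): capacity grows geometrically, so
// reallocations become exponentially rarer as the vector grows.
int main() {
    std::vector<int> v;
    std::size_t reallocations = 0;
    for (int i = 0; i < 1'000'000; ++i) {
        if (v.size() == v.capacity()) ++reallocations;  // next push must grow
        v.push_back(i);
    }
    // Prints a small number (a few dozen at most) for a million pushes.
    std::cout << "1000000 pushes, " << reallocations << " reallocations\n";
}
```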
Many optimizations deliberately use an algorithm with different, often higher algorithmic complexity, because the extra work in one area is cheaper than nominally simpler work elsewhere. An easy example: for four thousand integers, an O(n) linear search can be faster than an O(log n) binary search. The set is nowhere near large enough for asymptotic behavior to dominate. Sequentially scanning an average of 2000 integers plays to the caches and the prefetcher, while the roughly 12 probes of a binary search are random accesses that can each miss cache. Either way you're talking on the order of nanoseconds, but more processing can beat fewer comparisons. The tradeoff point is implementation specific and hardware specific. The fastest sorting routines in common use are hybrids that combine and switch between several algorithms. Quicksort and insertion sort each have an O(n^2) worst case that would be extremely slow on pathological input, but the hybrid makes careful choices: it detects when quicksort is degenerating and switches to heapsort, keeping the whole thing at O(n log n) while staying blazing fast in the typical case. Sketches of both examples follow.
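First, the two searches. A minimal sketch (the names `linear_contains` and `binary_contains` are mine, and the actual crossover point will vary with your hardware and data):

```cpp
#include <algorithm>
#include <vector>

// Linear scan: O(n) comparisons, but a perfectly predictable
// sequential access pattern that the cache and prefetcher love.
bool linear_contains(const std::vector<int>& sorted, int key) {
    for (int x : sorted) {
        if (x == key) return true;
        if (x > key)  return false;  // sorted input, so we can stop early
    }
    return false;
}

// Binary search: O(log n) comparisons, but each probe jumps to an
// unpredictable address; on a cold array of 4000 elements, each of
// the ~12 probes can be a cache miss.
bool binary_contains(const std::vector<int>& sorted, int key) {
    return std::binary_search(sorted.begin(), sorted.end(), key);
}
```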
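Second, a toy introsort-style hybrid. This is a sketch of the general shape, not any library's actual code (the names `introsort` and `hybrid_sort` and the cutoff of 16 are mine), though `std::sort` in mainstream C++ standard libraries is commonly an introsort along these lines:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Iter = std::vector<int>::iterator;

// Insertion sort: O(n^2), but hard to beat on tiny ranges.
static void insertion_sort(Iter lo, Iter hi) {
    if (hi - lo < 2) return;
    for (Iter it = lo + 1; it != hi; ++it) {
        int key = *it;
        Iter j = it;
        while (j != lo && *(j - 1) > key) { *j = *(j - 1); --j; }
        *j = key;
    }
}

// Quicksort drives; ranges of <= 16 elements go to insertion sort; if
// the recursion depth shows quicksort sliding toward its O(n^2) worst
// case, heapsort takes over and guarantees O(n log n).
static void introsort(Iter lo, Iter hi, int depth_limit) {
    while (hi - lo > 16) {
        if (depth_limit-- == 0) {
            std::make_heap(lo, hi);
            std::sort_heap(lo, hi);  // heapsort fallback
            return;
        }
        int pivot = *(lo + (hi - lo) / 2);
        // Three-way partition: [< pivot][== pivot][> pivot]. The middle
        // band is never empty, so every pass makes progress.
        Iter mid1 = std::partition(lo, hi, [pivot](int x) { return x < pivot; });
        Iter mid2 = std::partition(mid1, hi, [pivot](int x) { return x == pivot; });
        introsort(lo, mid1, depth_limit);  // recurse on the left half
        lo = mid2;                         // tail-loop on the right half
    }
    insertion_sort(lo, hi);
}

void hybrid_sort(std::vector<int>& v) {
    int depth = v.empty() ? 0 : 2 * static_cast<int>(std::log2(v.size()));
    introsort(v.begin(), v.end(), depth);
}
```

Every piece in isolation looks unattractive (insertion sort and quicksort are O(n^2) in the worst case, heapsort has poor cache behavior), but the switching logic ensures no input ever hits a pathological path, which is exactly the point about complexity and runtime being different things.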