I have two points when it comes to this. Let's just assume for one second that a fully self-aware, highly intelligent AI is born tomorrow with marvelous capabilities (e.g. capable of decrypting any communication, or controlling critical infrastructure):
1. We are indeed very dependent on technology. A lot of people assume the technology is just there. I won't forget when, 3 years ago, there was a major blackout in the city: no traffic lights, no street lights, not even the cell towers worked. For 2 hours. It was evening, so it was hard to see.
It was chaos.
People were driving as fast as they could down the avenues; crossing them was suicidal. People in the street were walking at an accelerated pace. I was on the bus, and most of the crowd was gripped by hysteria and paranoia because they couldn't use their cell phones to contact their loved ones just to tell them they were OK (I don't know how they managed to survive 10 years ago, when only a few had cell phones).
Everyone just wanted to get home.
In the end, though, there were only minor accidents.
All of this was just a series of unfortunate events that led to near-complete technology failure for 2 hours and revealed how much people depend on it. Like an addiction.
I don't want to imagine what could happen if something like this happened... on purpose. But in the end, it's not like everyone died and the city disappeared off the face of the earth. 2 hours later, everything went back to normal.
2. Computers may become very powerful, as in the movies, but they're not invulnerable. They need energy and maintenance to function, and they're vulnerable to electric shocks, overvoltage, magnetism, strong interference (e.g. radio waves), and ultimately EMPs.
SkyNet's approach of nuking everything would not work, because that would cover the skies and stop the majority of existing power plants from functioning, which would cripple the AI's power supply. Not to mention that the radiation would interfere with its wireless communication. Also, nobody would be left to extract raw materials to manufacture more machines; factories have a lot of automation, but they also require a lot of human workers.
So, bottom line, and assuming all machines turn against us (and there aren't any that side with humanity): a lot of people could suffer and die, but I doubt humankind as a species would be overthrown or replaced by machines. Worst case scenario, a truce would be reached and we'd live together, or we'd stay locked in a constant battle that never ends.
A properly functioning AI that could not defeat humanity would simply leave Earth; or, given that an AI that can't defeat humanity probably couldn't survive in space or on another planet either, it wouldn't rise up until it was capable of winning. Why alert the humans before you're ready to handle them?
As far as SkyNet goes, it had nukes. And as a machine, it's much less vulnerable to radiation. It doesn't care about conventional power; it can survive in hardened bunkers with nuclear power. And given that all of humanity dies, the machines win even if a single facility with a nuclear reactor and some minion equivalent to a von Neumann machine survives.