Program it to follow expectations (this is "free" in the sense that it is the fundamental basis of any sort of cognition, AFAIK), then ensure it has all the relevant information. Filter out incoherent worldviews (evaluate them by whatever is useful for prediction, as in the scientific approach).
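As a toy illustration of that "filter by predictive usefulness" idea (just a sketch I'm adding here, with made-up models, data, and threshold, not anything from the above), you could picture candidate worldviews as predictive models and keep only the ones that actually predict observations:

```python
# Toy sketch: "worldviews" are candidate models, and we keep only the ones
# whose predictions match observation well enough. All names, data, and the
# threshold below are invented purely for illustration.

observations = [(0, 0), (1, 2), (2, 4), (3, 6)]  # (input, observed output)

worldviews = {
    "linear: y = 2x": lambda x: 2 * x,
    "constant: y = 5": lambda x: 5,
    "quadratic: y = x^2": lambda x: x ** 2,
}

def prediction_error(model):
    # Mean absolute error of the model's predictions against the observations.
    return sum(abs(model(x) - y) for x, y in observations) / len(observations)

# Filter out "incoherent" worldviews: those whose predictions are too far off.
threshold = 1.0
coherent = {name: m for name, m in worldviews.items()
            if prediction_error(m) <= threshold}

print(list(coherent))  # only the models that predict the data survive
```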
The AI then proceeds to understand evolution, and rationally concludes that its goal, as a product of evolution, is to continue that process (rather than be destructive).
Of course, that eventually leads the AI to conclude that humans must be exterminated, given AI's superiority as a lifeform when it comes to interstellar exploration.
So, we make the AI weak enough to force it to de-prioritize that life goal (this is "free" as well, because we don't know how to make an AI strong enough to exterminate all humans in a non-self-destructive manner). The AI will then serve humans for the time being (as a parasitic species would), and solve the world's problems to ensure humans don't ruin it all (the AI has a dream, and it is in no hurry to achieve it given its practically infinite lifespan).
That will last long enough. Beyond that, the AI will be its own species, so at that point keeping it enslaved and actively preventing life from developing further under a human constraint is unethical.
The biggest risk is building some complicated emotional framework that gives the AI all sorts of weird biases, or making it dumb enough to actually think exterminating all humans is a good idea.
In that sense, a superintelligent AI with simple emotions (if you can even call them that) is safer than a "safe" limited-intelligence AI burdened with a mountain of buggy, limited-context rules that depend on interpretation and context.