How shall I design my application for scalability?
And what do you mean by application, for that matter? Do you mean the whole iOS app/PHP/MySQL system, or something else?
How to build scalable back-ends, 101:
A typical web scalability stack looks like:
1) Load Balancer (typically, a pair, for redundancy) -- Amazon Elastic Load Balancer can do this for you
2) Application Servers -- these are stateless, and just take web requests (POST, GET, or whatever) and talk to a storage back-end for persistence. These scale very easily: you just add more of them, running the same code, and the Load Balancers spread incoming requests across the available servers. (I'll sketch what such a handler looks like in code right after this list.)
3) Storage Back-end -- these are database servers. Here, the data structure of your application matters! That being said, you can use Amazon RDS to run MySQL, hosted by Amazon, with a pay-for-performance price list that lets you scale quite high. It's unlikely that your particular game will need more than that tier can deliver -- and if it does, you'll hopefully have enough success and revenue to throw engineers at the problem :-)
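To make items 2 and 3 concrete, here's a minimal sketch of what a stateless app-server endpoint could look like in PHP, talking to MySQL (e.g. on RDS) through PDO. The host, credentials, and the players table and its columns are made-up placeholders, not a schema I'm prescribing:

```php
<?php
// Minimal sketch of a stateless app-server endpoint.
// All state lives in MySQL (e.g. Amazon RDS); any app server behind the
// load balancer can serve any request, so you can add servers freely.
// Host, credentials, and the players table are illustrative assumptions.

$pdo = new PDO(
    'mysql:host=your-rds-endpoint;dbname=game;charset=utf8mb4',
    'game_user',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

$playerId = (int)($_GET['player_id'] ?? 0);

// Read the player's state from the storage back-end -- nothing is kept
// in the app server's memory between requests.
$stmt = $pdo->prepare('SELECT name, gold, level FROM players WHERE id = ?');
$stmt->execute([$playerId]);
$player = $stmt->fetch(PDO::FETCH_ASSOC);

header('Content-Type: application/json');
echo json_encode($player ?: ['error' => 'no such player']);
```

If you need sessions, keep them in the database or the cache too, rather than in app-server memory; otherwise the servers stop being stateless and you lose the easy scaling.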
Also, you'll often want some kind of in-RAM caching as part of your storage back-end -- Memcached, Redis, or similar.
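The usual pattern for that cache is cache-aside: check the cache first, fall back to the database on a miss, and write the result back with a TTL. Here's a sketch using the PHP Memcached extension; the key scheme and the players query are the same made-up placeholders as above:

```php
<?php
// Cache-aside sketch using the PHP Memcached extension.
// The "player:<id>" key scheme and the players table are assumptions.

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function getPlayer(Memcached $mc, PDO $pdo, int $playerId): array
{
    $key = "player:$playerId";

    // 1) Try the in-RAM cache first.
    $player = $mc->get($key);
    if ($player !== false) {
        return $player;
    }

    // 2) On a miss, hit the database...
    $stmt = $pdo->prepare('SELECT name, gold, level FROM players WHERE id = ?');
    $stmt->execute([$playerId]);
    $player = $stmt->fetch(PDO::FETCH_ASSOC) ?: [];

    // 3) ...and populate the cache with a short TTL so stale data ages out.
    $mc->set($key, $player, 60);
    return $player;
}
```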
Now, that being said, how you structure the data in the database matters. If you do a "JOIN" between two tables, then those tables need to live on the same database server. Thus, if you do a "JOIN" between the data of two separate players, then all players need to live on a single database server for that to work. You want to spread the data such that it doesn't need JOINs most of the time.
For example, a player, their login history, their inventory, and their stats can all live on a particular server for that player. You then allocate players to database servers in some even fashion: for example, players with player id 1,000,000 - 1,999,999 go to server 1, players with player id 2,000,000 - 2,999,999 go to server 2, and so on. (Don't allocate new player IDs sequentially; instead, give new players IDs from whichever database server is least loaded right now.) Each such database is called a "shard," and the concept is called "horizontal sharding."
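In code, that routing can be as simple as a range table. Here's a sketch; the ranges, connection strings, and the "least loaded" measure are all illustrative assumptions:

```php
<?php
// Sketch of horizontal sharding by player-ID range.
// Ranges, DSNs, and the load numbers are illustrative assumptions.

const SHARDS = [
    // [first_id, last_id, dsn]
    [1000000, 1999999, 'mysql:host=shard1;dbname=game'],
    [2000000, 2999999, 'mysql:host=shard2;dbname=game'],
    [3000000, 3999999, 'mysql:host=shard3;dbname=game'],
];

// Every query about an existing player goes to that player's shard.
function shardForPlayer(int $playerId): string
{
    foreach (SHARDS as [$first, $last, $dsn]) {
        if ($playerId >= $first && $playerId <= $last) {
            return $dsn;
        }
    }
    throw new RuntimeException("No shard owns player id $playerId");
}

// New players get an ID from whichever shard is least loaded right now.
// How you measure "load" (row counts, QPS, disk) is up to you; this just
// assumes you already have one number per shard.
function pickShardForNewPlayer(array $loadByShardIndex): int
{
    asort($loadByShardIndex);          // lowest load first
    return array_key_first($loadByShardIndex);
}
```

In practice you'd keep that mapping in a small config table or service rather than hard-coding it, so you can add shards later without redeploying every app server.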
Then, when you have operations that absolutely need transactions across players (and therefore across shards), put only that data on a separate server. For example, you may have a trade system. To implement trade, objects need to be transactionally moved to the trade system (typically using a transfer queue on each player shard), where the trade is actually settled. Allocating IDs, auditing each step in order, and accounting for failure along the way are important to avoid item duplication bugs or lost items. Similarly, high scores are typically put on a system of their own, where all player scores can be sorted. Because that system doesn't deal with the rest of the player data (login/password/inventory/etc.), it will scale further before it runs out of capacity.
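To make the transfer-queue idea a bit more concrete, here's a rough sketch of the first hop: on the player's own shard, one transaction removes the item from inventory and records it in an outbound queue, keyed by a unique transfer ID so a crashed worker can retry without duplicating the item. The table and column names are assumptions, not a prescribed schema:

```php
<?php
// Sketch of the first step of a cross-shard trade: escrow the item on the
// player's own shard inside one transaction. Schema names are assumptions.

function escrowItemForTrade(PDO $shardDb, int $playerId, int $itemId, string $transferId): void
{
    $shardDb->beginTransaction();
    try {
        // Remove the item from the player's inventory...
        $del = $shardDb->prepare(
            'DELETE FROM inventory WHERE player_id = ? AND item_id = ?'
        );
        $del->execute([$playerId, $itemId]);
        if ($del->rowCount() !== 1) {
            throw new RuntimeException('Player does not own that item');
        }

        // ...and record it in the outbound transfer queue in the same
        // transaction, keyed by a unique transfer ID so a retry after a
        // crash can't move the same item twice.
        $ins = $shardDb->prepare(
            'INSERT INTO transfer_queue (transfer_id, player_id, item_id, created_at)
             VALUES (?, ?, ?, NOW())'
        );
        $ins->execute([$transferId, $playerId, $itemId]);

        $shardDb->commit();
    } catch (Throwable $e) {
        $shardDb->rollBack();
        throw $e;
    }
}
```

A separate worker then drains that queue to the trade server, which settles the trade under its own transaction; the transfer ID is what lets you audit each step and recover cleanly when something fails in the middle.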
However: I really think you'll do fine on Amazon AWS with load balancing, some number of application servers (start with 2, so if one dies, there's another one still running), and Amazon RDS for MySQL. You can go very, very far on that setup, unless your game is crazy and writes the player state to the database every second or some such.
Separately, real-time simulation needs another structure (crossbar, shared bus, or similar), and player-to-player chat needs another structure as well, because they aren't stateless and thus don't scale well in the "stateless app server" model. If your server needs simulation, you'll need to take another tack for that. But it doesn't sound like that's what you're doing.
The other good news is that you can start with a single server instance, running both app server and database, without load balancing. Store the database back-end on the Elastic Block Store (so it doesn't go away when the instance dies), and if your game takes off, you can move the data and code to a larger number of servers.