Common wisdom says "MySQL does not support server-side cursors." I searched and found nothing indicating this was outdated information (read on). After a lot of unsuccessful work trying to get MySQL to handle cursors over result sets of more than a million rows, I was directed to use Postgres.
After installing Postgres and running benchmarks, I was unable to get the same transaction throughput I had achieved with MySQL. This may be a tuning issue, but after a week of working on it the best I could get was 20,000 rows per minute in Postgres versus 350,000 rows per minute in MySQL. So I went back to MySQL and reread the documentation for the MySQLdb interface for Python. Quite by accident I read about SSCursor and SSDictCursor in the Using and Extending section of the documentation. These cursor types open a server-side cursor and will process the large transaction set. As a bonus, they perform almost as well as the regular cursor.
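Here is a minimal sketch of how that looks in practice with MySQLdb's SSCursor; the connection parameters, database, and table names are placeholders, not anything from my actual setup.

```python
import MySQLdb
import MySQLdb.cursors

# Passing cursorclass=SSCursor keeps the full result set on the server side
# and streams rows to the client, instead of buffering everything in client
# memory the way the default cursor does.
conn = MySQLdb.connect(
    host="localhost",
    user="user",
    passwd="password",
    db="mydb",  # placeholder database name
    cursorclass=MySQLdb.cursors.SSCursor,  # or SSDictCursor for dict rows
)

cursor = conn.cursor()
cursor.execute("SELECT id, payload FROM big_table")  # hypothetical table

# Fetch in batches so only a small window of rows is in memory at any time.
total = 0
while True:
    rows = cursor.fetchmany(10000)
    if not rows:
        break
    total += len(rows)  # replace with the real per-row processing

print("processed", total, "rows")

# With an unbuffered cursor you must consume (or close) the result set
# before issuing another query on the same connection.
cursor.close()
conn.close()
```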
On my Core i3 box with 3 GB of memory running Fedora 17, I am achieving over 300,000 rows per minute, which is acceptable for the processes being performed.