In this post, I'll briefly explain the methods users have relied on in the past to navigate touch screens effectively with a screen reader. I'll then present some thought experiments for more efficient screen reader interactions with touch screens, along with their limitations and possibilities.
Current touch screen navigation methods
When iOS first introduced the rotor, a concept that was revolutionary at the time in the world of mobile touch screen readers, it made a real difference. It gave users a fairly efficient way to navigate by nearly any arbitrary element type that could be thought up. The way it works is that an invisible dial is turned by placing one finger in the middle of the screen and then moving another finger around that stationary finger. As the dial turns, navigation units such as characters, words, headings, and so on are cycled through. When the user stops turning the dial, they can flick down to move to the next item of the selected type.
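The core of this gesture is just angle tracking: measure the angle from the stationary finger to the moving finger, and count how far it has swept. Below is a minimal sketch of that idea in Python. The function name, the coordinate convention, and the 30-degree step size are all assumptions for illustration, not how VoiceOver actually implements the rotor.

```python
import math

def rotor_steps(pivot, prev_pos, curr_pos, step_degrees=30.0):
    """Count how many rotor 'clicks' one finger movement represents.

    pivot: (x, y) of the stationary finger (the dial's center).
    prev_pos, curr_pos: previous and current (x, y) of the moving finger.
    step_degrees: hypothetical angle per rotor step (an assumption here).
    Positive return values mean one direction of rotation, negative the other.
    """
    a1 = math.degrees(math.atan2(prev_pos[1] - pivot[1], prev_pos[0] - pivot[0]))
    a2 = math.degrees(math.atan2(curr_pos[1] - pivot[1], curr_pos[0] - pivot[0]))
    delta = a2 - a1
    # Normalize into (-180, 180] so crossing the atan2 seam doesn't jump.
    delta = (delta + 180.0) % 360.0 - 180.0
    # Truncate toward zero: a partial step is not yet a click.
    return int(delta / step_degrees)
```

A real implementation would accumulate `delta` across many touch-move events and fire a step each time the running total crosses `step_degrees`, but the angle math is the same.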
This kind of design is often called a pie or radial menu, but I'll refer to it here as a "gear menu." A "gear menu" is a menu with a central cancel action and other actions radiating out from the sides of the gear. The gear appears around the location where the user places their finger. As the user moves their finger out from the center, they encounter the various menu options, and they can simply lift their finger to activate the option under it.
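Hit-testing for such a menu reduces to two measurements: distance from the center (inside a small radius means cancel) and angle (which wedge of options the finger is over). Here is a minimal sketch, assuming evenly sized wedges laid out clockwise starting at the 3 o'clock position; the function name, layout, and cancel radius are all illustrative assumptions.

```python
import math

def gear_menu_hit(center, lift_pos, options, cancel_radius=40.0):
    """Return the option activated when the finger lifts at lift_pos.

    center: (x, y) where the finger first landed (the menu's center).
    lift_pos: (x, y) where the finger was lifted.
    options: labels arranged clockwise, first one centered at 3 o'clock.
    Lifting inside cancel_radius returns "cancel" instead of an option.
    """
    dx = lift_pos[0] - center[0]
    dy = lift_pos[1] - center[1]
    if math.hypot(dx, dy) < cancel_radius:
        return "cancel"
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    sector = 360.0 / len(options)
    # Shift by half a sector so each option is centered on its own angle.
    index = int(((angle + sector / 2.0) % 360.0) // sector)
    return options[index]
```

The nice property of this layout is that it is eyes-free: once a user has memorized where an option lives, a quick slide in that direction and a lift selects it without any exploration.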