[documentation]

I have been thinking about this for a long time, and it would also be perfect for Access News:

Make it easy to listen to audio books/articles/etc., to control moving around in them, and to annotate them using voice commands, haptic interactions, or other means (what else?). For example, I drive/walk a lot lately and listen to all kinds of text via TTS. It would be nice to be able to navigate and annotate those texts hands-free in such situations.
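As a rough sketch of that interaction layer (every name below is hypothetical; nothing here is existing Access News code), the voice/haptic commands could boil down to a small dispatch table over playback actions:

```python
# Hypothetical sketch: routing recognized commands (from voice or a
# haptic gesture) to playback actions. The Player interface and the
# command names are assumptions for illustration only.

class Player:
    """Minimal stand-in for a TTS/audio playback engine."""

    def __init__(self, paragraphs):
        self.paragraphs = paragraphs
        self.position = 0
        self.annotations = {}  # paragraph index -> list of notes

    def next_paragraph(self):
        self.position = min(self.position + 1, len(self.paragraphs) - 1)

    def previous_paragraph(self):
        self.position = max(self.position - 1, 0)

    def annotate(self, note):
        self.annotations.setdefault(self.position, []).append(note)

    def current(self):
        return self.paragraphs[self.position]


def dispatch(player, command, argument=None):
    """Map one recognized command to the corresponding action."""
    actions = {
        "next": lambda: player.next_paragraph(),
        "back": lambda: player.previous_paragraph(),
        "note": lambda: player.annotate(argument or ""),
    }
    action = actions.get(command)
    if action:
        action()


if __name__ == "__main__":
    player = Player(["First paragraph.", "Second paragraph."])
    dispatch(player, "next")
    dispatch(player, "note", "follow up on this")
    print(player.current(), player.annotations)
```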

I envisioned the text being stored in XML (in semantic-web style), with the audio being just one representation of that data, where the markup would aid navigation, playback, and other actions. (Nothing would prevent presenting the data/texts as a website as well, e.g., as documentation.)
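A minimal sketch of what that storage could look like, assuming made-up element names rather than any real Access News schema; the point is that the same markup that could render as a website can also drive audio navigation:

```python
# Hypothetical sketch of the envisioned XML storage. The element and
# attribute names below are invented for illustration, not an actual
# Access News schema.
import xml.etree.ElementTree as ET

DOCUMENT = """\
<article id="example" lang="en">
  <title>Sample Article</title>
  <section id="intro">
    <p id="intro-1">First paragraph of the introduction.</p>
    <p id="intro-2">Second paragraph of the introduction.</p>
  </section>
  <section id="body">
    <p id="body-1">Main content starts here.</p>
  </section>
</article>
"""

root = ET.fromstring(DOCUMENT)

# "Skip to the next section" becomes a walk over <section> elements,
# and each <p> is a natural TTS playback unit; an annotation could be
# attached to a paragraph simply by referencing its id attribute.
for section in root.findall("section"):
    print("section:", section.get("id"))
    for paragraph in section.findall("p"):
        print("  would speak:", paragraph.get("id"), "->", paragraph.text)
```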