[documentation]
I've been thinking about this for a long time, and it would also be perfect for Access News:
Make it easy to listen to and navigate audio books/articles/etc., and to annotate them using voice commands, haptic interactions, or the like (what else?). For example, I drive/walk a lot lately, and listen to all kinds of text via TTS. It would be nice to:
- add marks at points that I would like to revisit later
- add notes (e.g., audio notes that would be transcribed back via STT)
- go back/forward one sentence/paragraph
- spell the last/current word (e.g., a modal player, as in vim: enter a SPELL mode; see the first sketch after this list)
- add semantic markup automatically (this may be machine learning territory, because what is a sentence? what is a word? It would be nice to slowly build an ontology/thesaurus in a certain topic by starting out with a small list of expressions that is compared to the current text: new marks in the text would be added to the vocabulary with personal annotations, while expressions already present in the list would be marked up in the current text, which is then stored. Future iterations would enrich the vocabulary, and an automated system could tend to the already saved and read texts to mark up the new additions; the second sketch below shows this loop)
- what else?
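A minimal sketch of what the modal player could look like, in Python; everything here (the `Mode` enum, the `Player` class, the naive sentence/word boundaries) is invented for illustration, not an existing API. The key idea is that the playback position is a character offset into the source text, so marks and notes attach to the text itself rather than to audio timestamps:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Mode(Enum):
    PLAY = auto()   # normal playback; commands move by sentence/paragraph
    SPELL = auto()  # vim-style mode that spells out the current word


@dataclass
class Mark:
    offset: int     # character offset into the source text
    note: str = ""  # optional note, e.g. an STT-transcribed audio note


@dataclass
class Player:
    text: str
    position: int = 0
    mode: Mode = Mode.PLAY
    marks: list[Mark] = field(default_factory=list)

    def add_mark(self, note: str = "") -> None:
        """Drop a mark at the current point, optionally with a note."""
        self.marks.append(Mark(self.position, note))

    def back_sentence(self) -> None:
        # Naive boundary: the previous '.' before the current position.
        # Real sentence segmentation is much harder (the ML point above).
        self.position = self.text.rfind(".", 0, max(self.position - 1, 0)) + 1

    def spell_current_word(self) -> str:
        """Enter SPELL mode and return the current word letter by letter."""
        self.mode = Mode.SPELL
        start = self.text.rfind(" ", 0, self.position) + 1
        end = self.text.find(" ", self.position)
        word = self.text[start:end if end != -1 else len(self.text)]
        return " ".join(word)  # e.g. "w o r d", fed to TTS one letter at a time
```

Going back a paragraph, or spelling by grapheme cluster instead of raw characters, would slot in the same way; the point is that every voice/haptic command reduces to an operation on the text offset.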
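And a rough sketch of the vocabulary loop described above, under the same caveat that all names are hypothetical: expressions already in the list get marked up in the current text, and spans the reader marks while listening get added to the vocabulary with their personal annotations:

```python
import re


def markup_known_terms(text: str, vocabulary: dict[str, str]) -> list[tuple[int, str]]:
    """Return (offset, expression) pairs for every known expression in text."""
    hits = []
    for expression in vocabulary:
        for match in re.finditer(re.escape(expression), text, re.IGNORECASE):
            hits.append((match.start(), expression))
    return sorted(hits)


def learn_new_terms(marked_spans: list[tuple[int, int, str]], text: str,
                    vocabulary: dict[str, str]) -> None:
    """Add reader-marked (start, end, note) spans to the vocabulary."""
    for start, end, note in marked_spans:
        vocabulary.setdefault(text[start:end], note)
```

The automated pass over already saved texts is then just `markup_known_terms` re-run with the enriched vocabulary.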
I envision the text stored in XML (in semantic web style); the audio would be just one representation of that data, where the markup would aid navigation, playback, and other actions. (Nothing would prevent presenting the data/texts as a website, e.g., as documentation.)
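One possible shape of such an XML document, with invented element names (`article`/`p`/`s`/`term`/`mark`), read here with Python's standard library just to show that navigation falls out of the structure:

```python
import xml.etree.ElementTree as ET

# Sentences, terms from the vocabulary, and reader marks all live inline
# in the same document that the TTS player and the website would render.
doc = ET.fromstring(
    "<article topic='example'>"
    "  <p>"
    "    <s>A sentence with a <term id='t1'>known expression</term>.</s>"
    "    <s><mark note='revisit later'/>Another sentence.</s>"
    "  </p>"
    "</article>"
)

sentences = doc.findall(".//s")   # go back/forward one sentence
marks = doc.findall(".//mark")    # jump between marked points
print(len(sentences), "sentences,", len(marks), "marks")
```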