That still holds true for gen-AI. Organisations that provide transcription services can’t offload responsibility to a language model any more than they can to steno keyboard manufacturers.
If you are the one feeding content to a model then you are that responsible entity.
It's used to rattle more than just humans, in processes like DFAT [0]. Here's the NASA handbook on its use [1].
For experiences that are a little more human-friendly, subsonic audio is also commonly explored in noise art. Stefanie Egedy [2] is one artist who has been working in that space lately.
HTTP range requests [1] are enabled out of the box on both Apache and NGINX for static content. If you slap an fMP4 [2] onto a vhost, it will work. No CDN needed.
Going viral is a separate technical challenge, but probably unnecessary in most use cases.
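As a minimal sketch (hypothetical server name and paths), an NGINX vhost for this needs nothing beyond the stock static-file configuration, since range support is on by default:

```nginx
# Hypothetical vhost: serve a fragmented MP4 as a plain static file.
# Range requests work out of the box; no extra modules or CDN required.
server {
    listen 80;
    server_name video.example.com;   # placeholder name

    root /var/www/video;             # placeholder path holding e.g. stream.mp4

    location / {
        # The defaults already answer byte-range requests for static files;
        # there is nothing range-specific to configure here.
        try_files $uri =404;
    }
}
```

You can verify with something like `curl -sI -H 'Range: bytes=0-1023' http://video.example.com/stream.mp4`: a working setup answers `206 Partial Content` with a `Content-Range` header.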