2 Comments
Mark Lancaster

Interesting piece. The debate often conflates aligning behavior and capabilities with censoring content. Engineering safety through clearly defined operational boundaries and controllable autonomy levels might be a more robust path than focusing solely on filtering outputs, mitigating some censorship concerns while still addressing core safety needs.

weedom1

There is nothing new about long-standing censorship, which affects all information systems. It obviously and necessarily taints LLMs and AI as well. The more centralized information becomes, the more total the censorship.

Currently these AI creations hallucinate and are so unreliable that people should not depend on them for direct information. They get even very simple things, such as product specifications, very wrong, as I noticed just this week when I spent time playing with one. Medical verbiage coming from AI is as yet a disaster, due to existing censorship, cronyism, and horrendous corruption of the information it is sifting.

Certain people are creating libraries of differently aligned AI models that can be toyed with by those who want to discover for themselves the old adage: garbage in, garbage out.

At present, I see LLMs as useful mainly for generating search terms that help me find things by conventional means.
