Discussion about this post

Eric-Navigator

Hello! I am very happy to connect!

I just opened up my Substack. I am the creator of the Academy for Synthetic Citizens (ASC) concept. It is more than an idea; it's a vision for how we might raise, educate, and coexist with embodied artificial intelligence. Here, you'll find systematic insights drawn from AI, robotics, VR, the social sciences, activism, and science fiction.

Our mission is to imagine and build a future where synthetic citizens are not tools or threats, but trusted partners and friends. This may offer a brand-new way of thinking about AI alignment.

https://ericnavigator4asc.substack.com/p/hello-world

weedom1

There is nothing new about long-standing censorship, which affects all information systems. It obviously and necessarily taints LLMs and AI. The more centralized information becomes, the more total the censorship will be.

Currently these AI creations hallucinate and are so unreliable that people should not be depending on them for direct information. They get even very simple things, such as product specifications, very wrong, as I noticed just this week when I spent time playing with one. Medical verbiage coming from AI is as yet a disaster, due to existing censorship, cronyism, and horrendous corruption of the information it is sifting.

Certain people are creating libraries of differently aligned AI models that can be toyed with by those who want to discover for themselves the old adage: garbage in, garbage out.

At present, I see LLMs as useful mainly for generating search terms that help me learn where to find things by conventional means.
