Let’s start with one of the most honest ‘fakers’ around. I love this guy! (And he’s honored me by liking some of my stuff.) A force for wisdom in this crazy world - accompanied by lots of surprising laughs - here Penn Jillette takes on the gloom and pessimism that enemies of civilization are spreading in order to demoralize us:
“And yet, with all of this doom and gloom, everything is getting better by every metric we have. Things are getting better if we don’t destroy the planet with global warming and if Donald Trump doesn’t blow things up or Putin blows things up — those are the biggest “ifs” anyone’s ever said. But fewer people are starving. More girls are educated. Fewer people die at the hands of other people than ever in history.
"Those are big milestones. And some people argue — and they might be right — that art was part of that because the idea of reading a novel and putting yourself in someone else’s position, that (was) a huge deal.”
Truly, read this interview with Penn Jillette, one of the wisest of all wiseguys.
(And yes, I make a lot of the same points, citing maybe a hundred works of art - mostly sci fi flicks and novels - that have helped us to avoid fatal errors, across the last century! In Vivid Tomorrows: Science Fiction and Hollywood.)
== Detecting Deepfakes ==
As predicted 26 years ago in The Transparent Society, deepfakes have become a worldwide concern – the technology can be abused to create realistic videos that serve a negative purpose, such as spreading misinformation. One young scientist's AI software program can efficiently detect deepfake media with "state-of-the-art accuracy." But of course his program will then become part of the training regime for next-gen deep fakers!
On the same topic… AI model training datasets may include material scraped from the web. Artists by-and-large supported that practice when scraping was used to index their material for search results, but many have now come out against it, because it allows the creation of competing work through AI. Enter Nightshade, a new open source tool (still in development) that artists can apply to their imagery before uploading it to the web. It alters pixels in ways invisible to the human eye, but that “poison” the art for any AI model seeking to train on it.
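To make the mechanism concrete, here’s a toy sketch of a bounded, near-invisible pixel change. To be clear, this is *not* Nightshade’s actual algorithm – the real tool optimizes its perturbation against model feature-extractors – whereas this sketch merely caps random noise below an (assumed) perceptual budget; the file names are hypothetical.

```python
# Toy illustration of an imperceptible, bounded pixel perturbation.
# NOT Nightshade's real method: the actual tool computes its delta by
# optimizing against a model's feature extractor. Random noise stands
# in for that optimized delta here, just to show the size of change.
import numpy as np
from PIL import Image

EPSILON = 4  # assumed max per-channel change on the 0-255 scale

def perturb(in_path: str, out_path: str, seed: int = 0) -> None:
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # Bounded delta: every channel moves by at most EPSILON counts.
    delta = rng.integers(-EPSILON, EPSILON + 1, size=img.shape)
    out = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_path)  # PNG keeps the change lossless

perturb("artwork.png", "artwork_shaded.png")  # hypothetical file names
```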
== A much better use for these techs! ==
ELI5 is a new use for gollems (Generative Large Language Models) – to ‘gist’ long articles and offer summaries that explain them as if to a 5-year-old. I had never tried it out, till Peter Diamandis sent me a test run he had done, feeding into ELI5 my big July article in WIRED, which offered some unique takes on Artificial Intelligence (AI).
If it works, it could automate the one discursive process that I've always called the heart of argument among decent people: a trait that always shows which person is being undogmatic and mature.
That trait is willingness to paraphrase your opponent, asking "Is this what you meant to say?"
So, how well did ELI5 perform at automating this paraphrasing function, in this case? You can find the original WIRED article (July’23) here. Compare it to this ELI5 distillation (a minimal sketch of how such a gisting call might be wired up appears at the end of this section):
**ELI5 (Explain Like I'm 5) version** of David Brin's "AI souls" article in WIRED:
Imagine AI systems like really smart robots that can learn and do things on their own. Some people who make these robots are worried they might start doing bad things or become too powerful. So, they think we should take a break from making more robots until we figure out how to make sure they behave well.
One idea is to make each robot have its special ID and a home base, so we know which robot did what and can tell them to stop if they do something bad. It's like giving them a name tag and a home address. We also need rules to make sure the robots follow the rules, and if they don't, they could get in trouble, just like people do.
The big challenge is to make sure these rules work and that the robots can't trick us by pretending to be someone else. We also want the robots to help us by telling us if other robots are doing bad things. This way, all the robots help keep each other in check, and we can all be safer.
So how did ELI5 do? Of course almost none of you can judge, because almost none of you did more than skim the Wired piece (“tl;dr,” right? ;-).
I must say that in this case, it does give a useful, first-order, very grainy but not-unfair gist of my first, paramount point – that the key to ensuring decent behavior is not preaching, or rules, but accountability. And the only beings who will be able to sift AI and tattle on malignancy will be... other AIs. And they can't be rivals without behaving as individuals.
Of course there are masses of stuff left out. Especially, it never mentions my supporting point: the reason why individuation is not happening... the pervasive ubiquity of lethally dumb narratives about AI formats...
...and how it will be almost trivially easy (once we overcome dumb narratives) to create incentives so that AIs will feel impelled to individuate.
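Speaking of which, here’s a minimal sketch of what such an ELI5 gisting call might look like, assuming the OpenAI Python client. The model name, prompt wording and file name are my illustrative assumptions – not a description of how Peter’s actual test run was done.

```python
# Minimal ELI5 'gisting' sketch, assuming the OpenAI Python client
# (pip install openai) with an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def eli5(article_text: str) -> str:
    """Ask a chat model to gist an article as if for a 5-year-old."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Summarize the user's article as if explaining it "
                        "to a five-year-old, keeping the main argument intact."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(eli5(open("wired_july23.txt").read()))
```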
== AI kept honest by… blockchain? ==
It’s asserted that the real killer app for blockchain will be tracking the datasets used to train AI, which could both ameliorate the ‘black box problem’ of attribution and allow some (as yet to be negotiated) way to compensate people for use of their data. I agree, but there are things unmentioned in this article:
(1) Tracking ID codes for every clump of data will vastly multiply the already enormous energy costs of gollem (GLLM) processing.
(2) Delivering on that second promise will entail some kind of value transfer in extremely numerous and tiny increments. Call it ‘nano-payments’ or even ‘pico’! And for that to happen we must first build out a badly needed system for micropayments. (Which – BTW – I know how to finally do right! Every attempt so far has made the same, dumb errors.) A toy sketch of such chunk-level provenance bookkeeping appears just below.
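To make that bookkeeping concrete: hash every training chunk, chain the hashes so history is tamper-evident, and tally per-contributor credits that some future nano-payment layer could settle. This sketch illustrates the idea only – it is not any real blockchain, and all the names in it are hypothetical.

```python
# Toy sketch of dataset-provenance tracking: hash each training chunk,
# chain the hashes into an append-only ledger, and keep a per-contributor
# tally that a (hypothetical) nano-payment scheme could later settle.
import hashlib
from dataclasses import dataclass, field

@dataclass
class ProvenanceLedger:
    head: str = "0" * 64                      # genesis hash
    credits: dict = field(default_factory=dict)

    def record(self, contributor: str, chunk: bytes) -> str:
        chunk_id = hashlib.sha256(chunk).hexdigest()
        # Chain each entry to the previous head: altering old history
        # would change every subsequent hash, making tampering evident.
        self.head = hashlib.sha256((self.head + chunk_id).encode()).hexdigest()
        # Tally usage; a payment layer would convert counts to nano-payments.
        self.credits[contributor] = self.credits.get(contributor, 0) + 1
        return chunk_id

ledger = ProvenanceLedger()
ledger.record("artist_42", b"example training chunk")  # hypothetical data
print(ledger.head, ledger.credits)
```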
An even more important departure from the 2023 GLLM fad may come via “active inference” – an agent-based approach that’s still being born, but that offers much better chances of giving AGI ‘executive function’ or overview – the things that would make them credibly sapient. Further links: here and here.
And above all - for fresh perspectives(!) - my WIRED article (July'23) breaks free of the three standard 'AI-formats' that can only lead to disaster, suggesting instead a 4th: that AI entities can only be held accountable if they have individuality... even 'soul'...
== Again the cliché, getting transparency all wrong ==
Davood Gozli recites yet another tiresomely arm-waved tome praising ‘privacy’ and denouncing ‘transparency’ in favor of… what? Perhaps some of you can find in it even a hint of a practical recommendation.
While the starting premise is fine – that humans need trust and distance and respect – this entire ‘logical’ argument, about how to get and preserve those good things, is utterly wrong. Civilizations have built and maintained themselves on either of two principles: predatory dominance or reciprocal accountability.
For 6000 years, domineering males – kings, bandits, lords, the rich – emphasized the former.
In contrast, we are amid an experiment that has (imperfectly) empowered average people to look back at the mighty (sousveillance) and even (imperfectly so far) hold the mighty accountable for any oppressions. Light is how we deter those who would re-impose beastly feudalism. And propaganda in favor of shadows is exactly what folks like this author and Mr. Gozli are paid to foist upon you.
Dig it. Elites and predators thrive in shadows. Going back to our starting theme, illuminated by Penn Jillette, we have freer lives and are safer from oppression in direct proportion to the extent that average folk can see! Moreover, you have more privacy when you can catch the voyeurs and spies and perverts who try to violate it! In a situation of general transparency, you are able to tell all of those would-be invaders “Leave me alone! Mind Your Own Business (MYOB)! Or else I’ll show all our neighbors (and your mom) what a bully you are.”
Those who are using transparency to oppress are the leaders in countries where ‘transparency’ only applies to the masses, never those in charge. The elites have made themselves safe from reciprocal light. THEY and their shadows are the enemies of freedom and privacy! If you want freedom to do art in private, if you want all the good things Mr. Gozli rails about, then you want your private space surrounded with the light that deters invaders and abusers.
If you want a far better take on this problem than the arm-waved mumbo-jumbo in this “Transparency Society” screed, try an earlier book that’s far more detailed and balanced – The Transparent Society.
The great author Damon Knight wrote a story called "I See You" that takes transparency way farther than even I recommend! And yet that fascinating tale does illustrate the point that you are best left alone if you can deter those who would invade your space and crush your individuality, instead of trying to protect yourself by *hiding*.
Hear that great - if weirdly optimistic/disturbing - tale here.