:PROPERTIES:
:ID: 1f142832-05f9-4280-a8ca-aa6f35209f91
:END:
#+title: AI and Death of the web as we know it
#+Author: Yann Esposito
#+Date: [2024-05-16]
- tags :: [[id:a5be1daf-1010-428f-a30f-8faf95c1a42f][blog]]
- source ::
* Observation
First, for a few years now, we have been experiencing a huge acceleration of the
"enshittification" of the web.
Most of us rely on services, and soon most of these services will be worse or
will simply no longer provide the same benefits as before.
A typical example: as software engineers, we often search for technical knowledge
using Google. Quite often we end up on Stack Overflow, Reddit, Twitter, etc…
Now, with the promise of infinite AI generated +Spam+ SEO content, most searches will
point to a terrible website full of ads, with the additional cost that the
content cannot be trusted: it was not merely copied from a reliable source,
no, worse, it was invented by an AI that has a tendency to hallucinate its
answers and quite often returns wrong and even potentially dangerous ones.
Think of generated recipes: people have experimented with them, and the
generated recipes are good enough to put you in the hospital if you follow their advice.
So now, that's it. We are losing our ability to more or less trust random
content from the web.
Is this the end?
Perhaps.
Is there something we could do about it?
I think so, yes :)
And this problem is already mostly solved by the notion of "Web of Trust".
A Web of Trust is a decentralized system that helps you trust resources.
But you have your word to say. For example, if you have trusted someone for a while
and they change, if they start to stuff their content with horrible ads and AI
generated filler, you simply "downvote" or "block" them. All your direct connections
in the network of trust will be impacted by your decision, and if enough people
like you start to dislike the new content, the content of this user will
disappear from sight.
This is a bit like Reddit karma, but instead of the mechanism being centralized
and controlled by a single source, it is distributed among the users. Some
might enjoy a user; for them, that user will have a high score. For others, the
same user will not be enjoyable and their score will be very low. So low that you
will almost never be exposed to the content produced by this user.
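To make this more concrete, here is a minimal sketch in Python of how such a
personal score could combine your own ratings with those of the people you
directly trust. All names (=my_ratings=, =trusted_peers=, =score_for=) and the
weighting are invented for illustration; this is not a real protocol.
#+begin_src python
# Minimal sketch of a personal "Web of Trust" score (illustrative only).

# My own ratings: +1 (like), -1 (dislike/downvote), absent = no opinion.
my_ratings = {"alice.example": 1, "spam-farm.example": -1}

# People I directly trust, with their own public ratings.
trusted_peers = {
    "bob":   {"spam-farm.example": -1, "cooking.example": 1},
    "carol": {"cooking.example": 1, "alice.example": 1},
}

def score_for(source):
    """My opinion counts fully; my peers' opinions are averaged and halved."""
    own = my_ratings.get(source, 0)
    peer_votes = [r[source] for r in trusted_peers.values() if source in r]
    peer = sum(peer_votes) / len(peer_votes) if peer_votes else 0
    return own + 0.5 * peer

for site in ("alice.example", "spam-farm.example", "cooking.example"):
    print(site, score_for(site))
#+end_src
A source whose score falls below some threshold would simply never be shown to
you, and your downvote immediately lowers the score seen by everyone who trusts you.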
That will probably solve a first issue: removing from our collective sight all the
SEO spam websites, content, etc…
Now, what about discoverability, being able to search for content using this new knowledge?
Here we have multiple possible solutions:
1. Still rely on classical search engines, but use a browser plugin to filter the
   results and keep only websites whose trust value is high enough.
2. Call the "Web of Trust" to the rescue. We could have servers taking care of
   crawling the most trusted websites (starting from a few trusted people) and
   open source the algorithm so people could run that system on their local
   computer, or host it and offer their server to their friends (see the sketch
   after this list).
This will give us a very small web at first, but with a quality that should be
very high compared to the "Big Web".
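As a sketch of what such a trusted crawler could look like, here is a toy
breadth-first walk that only indexes what the trusted seeds point to. The site
names and the =endorsements= graph are invented; a real server would fetch
pages and read endorsements from them.
#+begin_src python
from collections import deque

# Fake endorsement graph: who links to / vouches for whom.
endorsements = {
    "trusted-friend.example": ["good-blog.example", "docs.example"],
    "good-blog.example": ["docs.example", "seo-farm.example"],
    "docs.example": [],
    "seo-farm.example": ["more-spam.example"],
}

def trusted_index(seeds, max_depth=1):
    """Collect everything reachable from the seeds within max_depth hops."""
    seen = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        site, depth = queue.popleft()
        if depth == max_depth:
            continue
        for linked in endorsements.get(site, []):
            if linked not in seen:
                seen.add(linked)
                queue.append((linked, depth + 1))
    return seen

# With depth 1 the index stays small and close to the trusted seeds.
print(trusted_index({"trusted-friend.example"}, max_depth=1))
#+end_src
The deeper you crawl, the bigger the index, but also the further you drift from
the people you actually trust.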
* BONUS
I think one issue with the "Web of Trust" is the ability for an attacker to "steal
the identity" of a trusted producer and publish in their name.
In particular, if the "Web of Trust" simply uses domain names, these are known to
rot easily and could be taken over.
For this, one simple but efficient mechanism would be to cryptographically
sign your content.
So instead of having a "Web of Trust" that only relies on domain names, we could
additionally add GPG signatures. These could be added in the header of the HTML
pages; this way a "Web of Trust"-friendly browser could display a
green mark saying "Hey, this content was really produced by this user with this
value of trust".
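No standard exists for such a header, so the following is only a sketch of the
sign-and-verify round trip a "Web of Trust"-aware browser would have to perform.
It uses Ed25519 from the Python =cryptography= package instead of GPG purely to
keep the example short, and the =wot-signature= meta tag is invented.
#+begin_src python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

page_body = b"<article>My actual hand-written content</article>"

# The author signs the page body once, offline.
author_key = Ed25519PrivateKey.generate()
signature = author_key.sign(page_body)

# Hypothetical header shipped with the page, e.g.:
# <meta name="wot-signature" content="...hex of the signature...">
shipped_signature = signature.hex()

# The browser knows the author's public key through the Web of Trust
# and checks that the content really comes from them.
public_key = author_key.public_key()
try:
    public_key.verify(bytes.fromhex(shipped_signature), page_body)
    print("green mark: content really signed by this author")
except InvalidSignature:
    print("warning: signature does not match, do not trust this page")
#+end_src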
This would probably change how we use the web, because it would force us to
"vote" from time to time, probably with more and more subtlety: for example with
different levels of like/dislike, in order to be able to completely block some
sources and not just make them less prominent.
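A minimal sketch of such graduated votes, with arbitrary weights and invented
level names, could look like this: a hard "block" hides a source outright, while
the other levels only move its average score.
#+begin_src python
VOTE_WEIGHTS = {"dislike": -1, "meh": 0, "like": 1, "love": 2}

def visible(votes, threshold=0.0):
    """A source stays visible unless blocked or rated below the threshold."""
    if "block" in votes:
        return False
    weights = [VOTE_WEIGHTS[v] for v in votes]
    return (sum(weights) / len(weights) if weights else 0) >= threshold

print(visible(["like", "dislike", "love"]))  # True: average stays positive
print(visible(["like", "love", "block"]))    # False: one block is enough
#+end_src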
* Last but not least, some wise words from Socrates
Don't forget what Socrates had to say about the invention of writing:
"For this invention will produce forgetfulness in the minds of those who learn
to use it, because they will not practice their memory.
Their trust in writing, produced by external characters which are no part of
themselves, will discourage the use of their own memory within them.
You have invented an elixir not of memory, but of reminding; and you offer your
pupils the appearance of wisdom, not true wisdom, for they will read many things
without instruction and will therefore seem to know many things, when
they are for the most part ignorant and hard to get along with, since they are
not wise, but only appear wise."
We put our confidence in a shared memory, and it was great for knowledge sharing.
With the recent changes, it appears we will need to regress and use our own
memory, read books, read man pages, and go to official documentation websites at
best.
I feel the only way to potentially solve this issue is perhaps a "Web of Trust", one that will drastically reduce the size of our shared memory.