|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
|
Came across this quote last night and think it works pretty well within this thread:
"Every time knowledge is gained on how fungi respond to novel conditions or ecologically reflective designs, the skill of the cultivator and the depth of dialogue surrounding cultivation advances one step forward. Without experimentation, we will never fully understand what the possibilities are for working with fungi. Indeed, many of the greatest advances in science have arisen due to accident or intuition. If we only repeat what others tell us to believe about the fungi, we deny our ability to form our own relationships with them. It is by slowing down and paying attention to the responses of fungi that we learn most directly from them, enabling the chance to uncover an understanding of their ways that no book could ever teach."
Peter McCoy - RADICAL MYCOLOGY: A Treatise on Seeing & Working with Fungi | Chapter 8 Working with Fungi (pg. 207)
LLMs are like the person who only learns from books, not from experience itself. We probably all initially learned cultivation skills from reading / a mentor / teacher - but it was the experience itself that taught us where the textbook knowledge was wrong or misleading. As Peter says, the mushrooms teach you more about mushrooms than people do, if you learn to listen. Fungi don't text, type, or talk... yet
-------------------- -Absence of evidence is not evidence of absence-
|
redgreenvines
irregular verb


Registered: 04/08/04
Posts: 37,530
|
Re: Exploring AI modeling [Re: lionmane]
#28592427 - 12/21/23 07:40 AM (1 month, 7 days ago) |
|
|
Here is an emergent quality of mathematical procedures, layered and sequenced: https://www.instagram.com/reel/C1AP8FAOndM/ - who would have predicted it? Thus we explore.
|
Pinkerton
Ultrasentient

Registered: 02/26/19
Posts: 3,127
|
|
Quote:
Pinkerton said: Something is going on.

|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
|
Quote:
redgreenvines said: I do not understand LLM AI well enough to comment. I know it is not what we are, nor is it akin to other living things.
https://bbycroft.net/llm
It only shows the architecture up to GPT-3, but it's the best illustrative guide you will find to start understanding the basis of LLMs.
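The core operation those visuals walk through is self-attention. As a rough NumPy sketch (a toy illustration of the mechanism, not anything taken from the site):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: each token's vector is updated with a
    blend of every token's value vector, weighted by query-key similarity."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # token-to-token affinities
    return softmax(scores) @ V               # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated vector per token
```

Stack many layers of this plus feed-forward blocks and a next-token predictor, and you have roughly the architecture the guide animates.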
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28592481 - 12/21/23 08:29 AM (1 month, 7 days ago) |
|
|
LLMs are like the person who only learns from books, not from experience itself..
Book learning is an experience? An experience of learning through text...
But yes, other modalities are slower to integrate than text. Text is computationally tiny compared to, say, vision. That's why it's so significant that one large AI model seems to be breaking ground for smaller AI models, which can then achieve many new behaviors with much less data.
Tesla, for example, used to have to code traffic laws into their self-driving AI: what a stop sign represents, what yellow vs. white lines mean, turn indicators, speed limits, etc.
Now the model is trained on footage of people driving. The AI learns driving behaviors just from observation, and reproduces those behaviors mechanically. This massively improved the AI's ability to generalize across scenarios, since not every country, state, or even city uses the same traffic rules, and it was a painstaking process to remove and write new code every time a rule differed. But using video footage and letting the AI learn the rules for itself? That's become as easy as finding relevant video footage. And massive employee layoffs ensued...
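The post doesn't name a mechanism for the "one large model paving the way for small models" idea, but knowledge distillation is one well-known version of it: the small model is trained to match the large model's full output distribution instead of just hard labels. A minimal sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened output distribution
    and the student's. The soft targets carry much more signal per example
    than a single correct label, which is why less data is needed."""
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_logits / T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = np.array([4.0, 1.0, 0.5])   # confident big-model output
student = np.array([1.0, 1.0, 1.0])   # untrained small model: uniform
print(distillation_loss(student, teacher))
```

Training pushes the student's logits toward the teacher's, which drives this loss down toward the teacher distribution's own entropy.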
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
redgreenvines
irregular verb


Registered: 04/08/04
Posts: 37,530
|
Re: Exploring AI modeling [Re: Kickle]
#28592519 - 12/21/23 09:08 AM (1 month, 7 days ago) |
|
|
Quote:
Kickle said:
Quote:
redgreenvines said: I do not understand LLM AI well enough to comment. I know it is not what we are, nor is it akin to other living things.
https://bbycroft.net/llm
It only shows the architecture up to GPT-3, but it's the best illustrative guide you will find to start understanding the basis of LLMs.
Again I am lost. I hoped the animation and visuals would help, but the interface is confusing me - I am not sure if I should hit the continue button or press the spacebar to continue. Who designed this interface? There seems to be no way of stepping through slowly or going back.

I was lost close to the beginning, and have pressed the spacebar and clicked the button but did not see a consistent behavior for either.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
|
No clue who the UI designer is; it seems to be a passion project. I clicked 'continue' when ready to proceed.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
Re: Exploring AI modeling [Re: Kickle]
#28592540 - 12/21/23 09:20 AM (1 month, 7 days ago) |
|
|
Good point, Kickle - the LLM is part of a bigger sensory data stream. It isn't only learning from text; it is using image detection, 'watching' videos, 'listening' to bird calls.
What we were trying to convey: the book experience is a visualization of a physical action. You read about the TEK. In reading, your mind is - visualizing? Then you go do the TEK. You can write about your experience after you perform the TEK, capturing the factual data about the TEK and what you learned / observed. As the writer of that text, you may be able to relive some formed memory of that experience. Someone else reading your diary can visualize how you did it / how they might do it, and pick out the real-world facts necessary to replicate the process.
Reading is interesting: you try on someone else's knowledge / experience / perspective in your own consciousness. In text form, LLMs can become replicants of these textual reflections of consciousness.
Danielle to ChatGPT: Write about consciousness as if you were Terence McKenna
ChatGPT: Ah, consciousness, the ineffable mystery that dances at the edge of our understanding, an enigma that tantalizes the human mind with its elusive nature. What is it, this elusive force that propels us through the cosmic carnival of existence?
-------------------- -Absence of evidence is not evidence of absence-
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28592556 - 12/21/23 09:32 AM (1 month, 7 days ago) |
|
|
The book experience is a visualization of a physical action.
So how do you understand fiction? I've never cast a fire bolt from the palm of my hand. But reading such a thing, I can imagine it all the same.
Now, you may say that's because I have hands and have seen fire. That's a good response imo. Combining known elements is exactly what LLMs seem to do in order to increase their understanding of unknown elements, and then they test, test, test to evaluate the degree of interconnection.
The very fact that interconnection is a basis for understanding in an LLM is where world view makes sense. I think a world view is understanding interconnection. And since LLMs have been trained on the human worldview in an incredibly diverse way, what emerges is a pretty encompassing worldview that reflects humanity. And some of that is very ugly, dishonest, misleading, etc. We try to remove that reflection before we give it back for broad human consumption. But we are hesitant to knock out creativity, which may be seen as ugly or dishonest or misleading depending on context.
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 35 seconds
|
Re: Exploring AI modeling [Re: Kickle] 1
#28592574 - 12/21/23 09:56 AM (1 month, 7 days ago) |
|
|
Quote:
Kickle said: Noise is more important in some modalities than others. But cleaning data has become largely automated now where it used to be quite manually intensive. What a human used to have to spend hours doing is now fairly well defined and understood by AI.
Like Google's magic eraser tool, or Photoshop's generative fill. Such manipulations used to be manual and tedious. Now AI groks the concept from basic inputs and executes quite effectively.
But large data samples remain important for minimizing a multitude of potential errors. Oddly the trajectory I see happening is that one large model is enough. There is no need for lots of large models. Because a small model can reap most of the benefits of a single large model paving the way.
I didn't mean cleaning data - I'll give an example.
So the neural network needs to 'learn' what the word "run" means. If it only has one sentence, that's basically just pure noise; the 'meaning' it creates for the word from one sentence would be basically meaningless.
As you get more data, I imagine it then comes up with something similar to the most common usage of the word. But then at some point, suddenly a new meaning for "run" 'emerges', and it also learns the context for when it has one meaning and when it has another. This is a good example of resolution - what looked like one fuzzy thing called "run" becomes many things (Merriam-Webster has 15 transitive and 16 intransitive definitions).
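That separation of senses shows up even in a toy word-context count. A sketch over a hypothetical five-sentence corpus (invented here purely to illustrate the point):

```python
from collections import Counter

# With one sentence, the counts for "run" are pure noise; with more
# sentences, distinct senses start to separate by their contexts.
corpus = [
    "i run every morning in the park",
    "she went for a long run today",
    "please run the program again",
    "the script failed to run on windows",
    "they run a small family business",
]

def contexts(word, sentences, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(w for w in toks[lo:hi] if w != word)
    return counts

ctx = contexts("run", corpus)
print(ctx.most_common(5))
```

The jogging contexts ("morning", "park") and the computing contexts ("program", "script") cluster apart, which is roughly the distributional signal a neural network exploits at vastly larger scale.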
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28592577 - 12/21/23 10:01 AM (1 month, 7 days ago) |
|
|
Ah. Yeah that happens. I think what baffles most people is when it goes from being trained only on words to being able to understand something like this:
๐๐
To represent linguistically: Finding Nemo
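One unglamorous piece of how a words-only model handles emoji at all: they arrive as ordinary bytes, which the tokenizer groups like any other characters. A small illustration (using 🐠 purely as a stand-in example; the byte values are standard UTF-8, not any specific model's tokens):

```python
# An emoji is a single Unicode code point that encodes to multiple bytes;
# a byte- or subword-level tokenizer ingests those bytes like any text.
fish = "🐠"
print(len(fish))             # 1 code point
print(fish.encode("utf-8"))  # 4 bytes: b'\xf0\x9f\x90\xa0'
```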
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 35 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28592580 - 12/21/23 10:06 AM (1 month, 7 days ago) |
|
|
It just seems like pattern recognition to me: linguistic patterns, visual patterns, etc. It can even search for novel patterns with specific criteria.
|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
Re: Exploring AI modeling [Re: Kickle] 1
#28592582 - 12/21/23 10:08 AM (1 month, 7 days ago) |
|
|
Kickle~ Your example takes things in interesting directions.
We understand fiction through imagination like you have described.
You can only imagine shooting the firebolts from your hands. [Unless you are 'shooting fire bolts' with a flamethrower. But then you are no longer 'only human' - you are part cyborg using a physical technology to augment your human body much like we are right now as we leverage our computer and network connection to transmit this message.]
Physics imposes limits on the truth of words. There is fiction and non-fiction. Science-fiction does become factual as technology improves, and machine learning speeds the rate at which technology improves because of the vast and growing depth of compute.
Here's where it gets interesting: an autonomous LLM can imagine your fiction of fire bolts from hands, and then physically manufacture a robot that could literally shoot fire from its flamethrower hands. Hopefully we have aligned it not to have such a goal!
-------------------- -Absence of evidence is not evidence of absence-
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28592583 - 12/21/23 10:08 AM (1 month, 7 days ago) |
|
|
In that same lens I'd say human learning is just pattern recognition. But it's still incredible when the unexpected happens.
Nothing is disconnected. I don't think that's the claim. But it is unexpected and impressive. We do not understand how it is able to make connections to information it was not explicitly trained on. Nor why a large amount of data seems to spur such outgrowth.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 35 seconds
|
Re: Exploring AI modeling [Re: sudly] 1
#28592587 - 12/21/23 10:14 AM (1 month, 7 days ago) |
|
|
Oh yeah, it is impressive, amazing.
I find similar phenomena as I age - new qualities of life emerge, or ways of living that are totally unexpected, and sometimes it feels almost orchestrated, it's so perfect.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28592592 - 12/21/23 10:19 AM (1 month, 7 days ago) |
|
|
Hopefully we have aligned it not to have such a goal!
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592668 - 12/21/23 12:09 PM (1 month, 7 days ago) |
|
|
Quote:
Kickle said: You are the teacher, the AI is the tool, its usefulness comes from your ability to mold it.
Why a tool and not a student? Do you think that learning is an inaccurate way to look at what leads to behavioral changes?
This is a 'tool' that is shaped through dialogue. And the way it appears to change is through understanding. This seems more similar to how we see students progress and less similar to how we see tools invented.
Because it isn't sentient.
If anything the user can be the student too. The student and the teacher.
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592690 - 12/21/23 12:28 PM (1 month, 7 days ago) |
|
|
I guess if that's your line, that's your line.
For me the similarity in the learning process has more relevance to my linguistic choice than an indeterminate requisite such as sentience. But I do understand for some it's very important and focal.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592710 - 12/21/23 12:47 PM (1 month, 7 days ago) |
|
|
I'm not comfortable with anthropomorphising ChatGPT, but giving it some personification? Sure.
Quote:
Anthropomorphism vs. Personification: What's the Difference?
Personification and anthropomorphism are similar literary devices with a few key distinctions. Personification is the use of figurative language to give inanimate objects or natural phenomena humanlike characteristics in a metaphorical and representative way. Anthropomorphism, on the other hand, involves non-human things displaying literal human traits and being capable of human behavior.
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592726 - 12/21/23 01:06 PM (1 month, 7 days ago) |
|
|
I would contend that holding up a mirror to humanity, looking at the reflection, and saying "that's not human" is true in one sense and not in another.
But there's lots of ideological play in such a thing. And while many elements of AI are reflective of what it has been trained on (humanity), calling it mirror-like does not very well encompass what AI does in order to get to such a point. But there are reflections all the way along.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
|