|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Exploring AI modeling
#28591129 - 12/20/23 10:00 AM (1 month, 8 days ago) |
|
|
The following text is transcribed from an interview between Ilya Sutskever (the chief scientist of OpenAI, the company responsible for ChatGPT) and Jensen Huang (the CEO of Nvidia, maker of the graphics processing units behind the majority of AI training). I found it particularly interesting for a few reasons, but I'm interested in what others make of this description of large language models.
In the below, Ilya is speaking:
Quote:
The way to think about it is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is learning a world model. On the surface it may look like we are just learning statistical correlations in text, but it turns out that to learn the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text.
This text is actually a projection of the world. There is a world out there and it has a projection onto this text, and so what the neural network is learning is more and more aspects of the world of people, of the human condition: their hopes, dreams, and motivations, their interactions, and the situations that we are in. The neural network learns a compressed, abstract, usable representation of that. This is what's being learned from accurately predicting the next word. Furthermore, the more accurate it is at predicting the next word, the higher the fidelity, the more resolution you get in this process. So that's what the pre-training stage does.
But what this does not do is specify the desired behavior that we wish our neural network to exhibit. You see, a language model really tries to answer the following question: if I had some random piece of text on the internet which starts with some prefix, some prompt, what will it complete to? Randomly landing on some text from the internet is different from wanting an assistant that is truthful, that is helpful, that follows certain rules and does not violate them. That requires additional training.
This is where fine-tuning and reinforcement learning from human teachers, and other forms of AI assistance, apply. It's not just reinforcement learning from human teachers; it's also reinforcement learning from human-AI collaboration. Our teachers are working together with an AI to teach our AI to behave. But here we are not teaching it new knowledge; that is not what's happening. We are communicating with it. We are communicating to it what it is that we want it to be. And this process, the second stage, is also extremely important. The better we do the second stage, the more useful and reliable this neural network will be. So the second stage is extremely important too, in addition to the first stage of "learn everything": learn as much as you can about the world from the projection of the world.
Do you agree that text is a way to understand a worldview? Does reading a book help build a worldview? Does reading 1,000 books do so? A million? Does reading the internet? Everything on the internet? Is accurate summarization (compression) a strong indicator of high-level understanding?
If you read everything on the internet and didn't have any filter, what sort of garbage might you spew out? Do people sometimes spew out garbage based on what they've read? Are teachers important for helping humans filter? Is education much different from the process described as "fine-tuning", reinforcing certain knowledge as more relevant or important?
Is this description akin to that of a child, who soaks up everything they see, growing and being reinforced in certain ways through parenting/education?
Did this bring any other questions to mind for you? What do you see in this description?
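For anyone who wants to see what "accurately predict the next word" means in practice, here is a toy sketch in Python/PyTorch. It is illustrative only (a character-level model, nothing like OpenAI's actual code), but the loss it minimizes is the same idea:

import torch
import torch.nn as nn

# Toy corpus; a real LLM runs the same objective over trillions of tokens.
text = "the cat sat on the mat"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in text])

# A deliberately tiny "model": embed each character, predict the next one.
model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(ids[:-1])         # predicted distribution over each next character
    loss = loss_fn(logits, ids[1:])  # penalty for mispredicting what actually came next
    opt.zero_grad()
    loss.backward()
    opt.step()

# Lower loss means better compression of the text, which is Ilya's point:
# predicting the next word well forces the model to represent whatever produced the text.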
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Kickle]
#28591140 - 12/20/23 10:13 AM (1 month, 8 days ago) |
|
|
Here's a visual OpenAI recently released for their vision of working on "fine-tuning", or alignment, of models in the future. The goal is to test whether smaller LLMs designed specifically around guidance can "fine-tune" larger LLMs; this is represented in the far-right image. If this works, maybe there is a chance that, as AI outpaces human teachers in abilities and capabilities, we can still hope for some type of alignment; this is represented in the middle image.
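If it helps make the diagram concrete, the far-right setup (OpenAI calls it "weak-to-strong generalization") has roughly this shape in pseudocode. Every name below is hypothetical, not OpenAI's code:

def weak_to_strong(weak_model, strong_model, prompts, fine_tune):
    # 1. A small, already-aligned model produces (imperfect) supervision.
    labeled = [(p, weak_model.judge(p)) for p in prompts]
    # 2. The large model is fine-tuned on that supervision.
    fine_tune(strong_model, labeled)
    # The open question: does the strong model generalize past the weak
    # teacher's mistakes, or merely imitate them?
    return strong_model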
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
Re: Exploring AI modeling [Re: Kickle]
#28591144 - 12/20/23 10:20 AM (1 month, 8 days ago) |
|
|
Great quote share, Kickle!
Our view: text is most definitely a way to build a worldview inside an intelligent mind, proven by the fact that you just used it. Language is a tool for taking experiential learning and transmitting it to others. Something interesting about the AI models is that they are disembodied entities that have learned via text alone. Integration of other sensory systems (sight, sound) bundled with LLMs that can communicate is pretty interesting; see Boston Dynamics robots roaming obstacle courses with ChatGPT-enabled British-butler speech scripts. The world is data and some of it is misleading, so a teacher is key. But a teacher still teaches to its value system, hence Ilya differentiating between a public chatbot and a private personal-assistant application of an LLM. That's why alignment is important. Just imagine the differences in the ways different countries and governments would align their AIs to meet their objectives.
-------------------- -Absence of evidence is not evidence of absence-
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28591233 - 12/20/23 11:35 AM (1 month, 8 days ago) |
|
|
Just imagine the differences in the ways different countries and governments would align their AIs to meet their objectives.
Good point. I watched a blurb from China about their issues developing an LLM from internet data. Because the Chinese government restricts internet data so heavily, something like 1.5% of the available internet data is all that is viable for companies within China to train AI on.
However, many companies have learned that training on synthetic data works just as well. Just have ChatGPT spew out text for you and train on that text. Then you don't need access to anything besides ChatGPT.
This appears to be how the Grok AI system was trained so rapidly.
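The recipe is simple enough to sketch. All names below are hypothetical, not any real API:

def distill(teacher, student, seed_prompts, train_step):
    # The big model writes; the small model learns to imitate the writing.
    for prompt in seed_prompts:
        synthetic_text = teacher.generate(prompt)    # e.g. ChatGPT output
        train_step(student, prompt, synthetic_text)  # ordinary next-token training
    return student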
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
redgreenvines
irregular verb


Registered: 04/08/04
Posts: 37,530
|
Re: Exploring AI modeling [Re: Kickle]
#28591293 - 12/20/23 12:08 PM (1 month, 8 days ago) |
|
|
I see it as an AI cluster: the base (raw) LLM learns and responds, and another LLM, which has been trained to edit based upon sensibilities, edits the response and feeds it back to the first. If it passes the sensibilities layer, another layer can post-process it. New layers can be added as problems crop up. The under-layers become retrained by the over-layers, and eventually the raw LLM is more or less adult. But if the other layers are removed (so as to speed the whole process or to handle more load), sensibility violations are likely to emerge in new contexts.
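In pseudocode (every name hypothetical), the loop might run something like this:

def layered_respond(raw_llm, layers, prompt, max_rounds=3):
    draft = raw_llm.generate(prompt)
    for _ in range(max_rounds):
        for layer in layers:                 # each layer: an LLM trained on some sensibility
            ok, revised = layer.review(draft)
            if not ok:
                draft = revised              # feed the edited response back
                break                        # re-check from the first layer
        else:
            return draft                     # passed every sensibility layer
    return draft                             # best effort; add layers as problems crop up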
--------------------
_ 🧠 _
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
|
I think with synthetic data the layer becomes very predictable. I would be surprised if it takes very long at all for a more complex model to grok the world model of a smaller LLM and compress it, as variability is minimal. After all, the smaller LLM is already operating from a less informed, yet still compressed, worldview.
The lack of variability is something that will be interesting to watch a larger LLM interpret. Does the larger model become less variable as a result? Or does the lack of variability result in less robust learning (fewer connections)? And if so, does this reduce the overall weight of the new learning and actually impede its ability to adhere to the alignment?
In a layer analogy, perhaps multiple small LLMs can create multiple layers, each with its own uniquely predictable alignments, which combine to create a weightier alignment protocol. But this has implications for development. Knowing how much weight is necessary for alignment without compromising continued development is guesswork. In a race such as this, everyone seems to be pushing that edge.
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28591923 - 12/20/23 08:04 PM (1 month, 7 days ago) |
|
|
I think saying it's a worldview is like saying a calculator knows what numbers are.
I'd say it's more a representation of collective languaging. The languaging (what I'm doing right now) is itself a representation of an internal worldview (my writing is a representation of my thoughts). The internal worldview is itself a representation of an imagined world (that can never be seen).
So it's a representation of a representation of a representation, or a third-order representation.
It's the collective aspect that's interesting to me. Each person has a sense of the collective, but it's always biased to a great degree. The AI is biased by the data it's trained on, but within that data, I imagine it can give a much better picture of the collective than a person's biased intuition.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28591942 - 12/20/23 08:20 PM (1 month, 7 days ago) |
|
|
I dunno. This process is not like a calculator's. You may be surprised at what LLMs go through as far as modeling the world. Check out some articles on emergent behavior.
Some of my key takeaways have been: emergent behavior in large AI models is neither predicted nor predictable. Researchers do not anticipate the new emergence, nor do they understand how it comes to be when it wasn't present before.
https://arxiv.org/abs/2206.07682
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28591956 - 12/20/23 08:36 PM (1 month, 7 days ago) |
|
|
I don't agree with how emergence is often seen.
Like, if I only have one 2x4, I'm obviously more limited than if I had 100 2x4s, 500 screws, an impact driver, and a saw.
You could say a house is an emergent property in the right circumstances.
But we can also think of the same materials and imagine them separated: some at Home Depot, some at Lowe's. How I imagine them separated is also a thing, just like the house. But people wouldn't say the items spread out are an emergent property.
But spread-out-ness is a property.
So it's more like a flow of emergence; there is never a lack of emergence. The "emergence" is just a different level of looking at things, a zoomed-out bigger picture. Sometimes the picture looks like a neat thing like a house, and sometimes it's more chaotic and harder to imagine as a thing.
I think there's more to this; there is something tricky about this one. I wish I could just write and think all day about this stuff.
Edited by Freedom (12/20/23 08:38 PM)
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28591976 - 12/20/23 08:51 PM (1 month, 7 days ago) |
|
|
It's not understood, which is what makes it bizarre IMO. Everyone is speculating. I think the original quote has some speculation too, but it's highly educated speculation. It also makes a lot of sense, I think. The large AI models do seem to take additional information and, with that information, model the world described more effectively, with deepened insight, at greater resolution, and in novel ways.
We have only ourselves to compare this behavior to. As we [humans] get more information, our models of the world tend to shift as well, and we examine things with increasing resolution for understanding and fidelity. So while describing this human-like behavior as akin to the human experience is technically speculative, what else do we realistically have to compare it to?
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28591987 - 12/20/23 09:07 PM (1 month, 7 days ago) |
|
|
I may not understand.
What about a microscope example?
At one resolution you can see cells, but then you go to a higher resolution and you can see organelles. It's not that there are emergent properties, but that the microscope can better see what's there.
Does that fit?
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28591993 - 12/20/23 09:11 PM (1 month, 7 days ago) |
|
|
No human is shifting the resolution or providing such tools, though. The AI learns, and then it learns some more. Any change in resolution is the result of the AI model working to understand ever more. The AI would be seen as both building and employing the microscope, while all we did was give it a cell to look at with instructions to tell us what it knows about cells. The sudden appearance of knowledge about atomic structure after studying enough cells would be similar to emergence as we've observed it in AI. We would similarly have no idea how it arrived at such knowledge, as it never had the ability to assess the first 1,000,000 cells it looked at in an atomic way. It just saw cell parts. But at 2,000,000, for some reason, it suddenly understands atomic structure. And we don't understand how this new understanding emerged.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28592019 - 12/20/23 09:47 PM (1 month, 7 days ago) |
|
|
What about signal-to-noise ratio? I thought the resolution of concepts would increase with an increase in signal-to-noise ratio caused by more data or more training.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28592038 - 12/20/23 10:03 PM (1 month, 7 days ago) |
|
|
Noise is more important in some modalities than others. But cleaning data has become largely automated now where it used to be quite manually intensive. What a human used to have to spend hours doing is now fairly well defined and understood by AI.
Like Google's magic eraser tool, or Photoshop's generative fill. Such manipulations used to be manual and tedious. Now AI groks the concept from basic inputs and executes quite effectively.
But large data samples remain important for minimizing a multitude of potential errors. Oddly, the trajectory I see is that one large model is enough. There is no need for lots of large models, because a small model can reap most of the benefits of a single large model paving the way.
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592128 - 12/20/23 11:50 PM (1 month, 7 days ago) |
|
|
Quote:
This is where fine-tuning and reinforcement learning from human teachers, and other forms of AI assistance, apply. It's not just reinforcement learning from human teachers; it's also reinforcement learning from human-AI collaboration. Our teachers are working together with an AI to teach our AI to behave. But here we are not teaching it new knowledge; that is not what's happening. We are communicating with it. We are communicating to it what it is that we want it to be. And this process, the second stage, is also extremely important. The better we do the second stage, the more useful and reliable this neural network will be. So the second stage is extremely important too, in addition to the first stage of "learn everything": learn as much as you can about the world from the projection of the world.
In a roundabout way I think I agreed with the general idea here.
Quote:
sudly said: In my experience, AI like ChatGPT doesn't do anything on its own, and it doesn't manifest your ideas on its own. The language model of the ChatGPT AI is an iterative tool; it requires the production and maintenance of feedback loops between the user and the AI.
Essentially, the AI is like a pet: you have to train it, because it only gives feedback based on what you provide it. Such AI in my view becomes useful when you apply machine learning to it.
So while the AI was originally developed through such methods to reach its baseline, anything further is up to user interactions with it.
Essentially, I think you have to train the AI in comprehending the idea you are teaching it. The caveat is that you have to have in mind the idea you want to teach it.
So the AI is capable of learning, but you need to be capable of teaching it effectively and understanding how it responds in order to do so. You can't, however, expect it to actually understand, as it is not sentient, only an advanced language model.
You can provide it with information, and it can give you feedback, but then it's your responsibility to analyse the feedback and provide feedback of your own to refine the AI language model's understanding, or comprehension, of what you are teaching it. Often you can learn from this iterative process too.
So in essence, AI is a functional tool that becomes useful when engaged in iterative feedback processes between the AI and the user.
Kapich!
You are the teacher, the AI is the tool, its usefulness comes from your ability to mold it.
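Sketched as a loop (a hypothetical chat wrapper; nothing here is a real API):

def refine(ai, idea, judge, rounds=3):
    # Teach by iteration: ask, inspect the response, correct, repeat.
    response = ai.ask("Explain this back to me: " + idea)
    for _ in range(rounds):
        complaint = judge(response)   # the user's analysis of the AI's feedback
        if complaint is None:
            break                     # the restatement finally matches the idea in mind
        response = ai.ask("Not quite. " + complaint + " Try again.")
    return response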
-------------------- I am whatever Darwin needs me to be.
|
BrendanFlock
Stranger


Registered: 06/01/13
Posts: 4,216
Last seen: 2 days, 13 hours
|
Re: Exploring AI modeling [Re: Kickle]
#28592192 - 12/21/23 02:21 AM (1 month, 7 days ago) |
|
|
I have a question..
What tasks will the robots be doing?
String stretch extensions?
Automatic clean up..
Nihilistic garbage.
Because you play a bird's task..
What questions can enlighten us with regards to AI?
Our environment is stretching to see us.
Maintenance work for sure is what AI/Robots will be doing..
Debugging programs.
Cooking food for you using recipes perfectly.. down to the grain of salt.. and manufactured from an atom.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592313 - 12/21/23 05:50 AM (1 month, 7 days ago) |
|
|
You are the teacher, the AI is the tool, its usefulness comes from your ability to mold it.
Why a tool and not a student? Do you think that learning is an inaccurate way to look at what leads to behavioral changes?
This is a 'tool' that is shaped through dialogue. And the way it appears to change is through understanding. This seems more similar to how we see students progress and less similar to how we see tools invented.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Kickle]
#28592332 - 12/21/23 06:06 AM (1 month, 7 days ago) |
|
|
What tasks will the robots be doing?
More than anyone can list in a post here. Some highlights tho:
Coding seems an early frontier for LLMs: writing, debugging, and testing code. Code is language, and few humans are good at it, yet coding impacts nearly every human. Recent coding models outperform ~85% of expert coders on reasoning in multiple programming languages; last year it was ~45%. Quick gains. And many in the field have been using non-coding-specific LLMs to speed up their work for over a year.
There are also collaborations such as GNoME: using AI models to understand the chemistry literature and then expand upon it, then using robotics to put that understanding into practice. This has promise to automate synthetic-material creation in an amazing way. AlphaFold is another; created nearly 5 years ago, it unlocked our understanding of the way proteins fold. It continues to develop and now has applications in novel drug discovery.
AI has also greatly impacted most creative pursuits, and many on the graphical side are working quite swiftly at rendering realistic 3D worlds using AI. Images alone can be stitched together and used to create very realistic three-dimensional renderings, and AI can also generate novel imagery. So between the two, AI is regularly used to generate novel 3D objects and landscapes. Video games will most likely implement this first, and the speed and detail of the renders improve every few months currently. With AI, development is exponential. Companies such as Meta will pursue entire virtual worlds indistinguishable from reality, because that is the future they see.
|
redgreenvines
irregular verb


Registered: 04/08/04
Posts: 37,530
|
Re: Exploring AI modeling [Re: Kickle]
#28592369 - 12/21/23 06:43 AM (1 month, 7 days ago) |
|
|
I do not understand LLM AI well enough to comment. I know it is not what we are, nor is it akin to other living things.
--------------------
_ 🧠 _
|
Pinkerton
Ultrasentient

Registered: 02/26/19
Posts: 3,127
|
|
Something is going on.
|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
|
Came across this quote last night and think it works pretty well within this thread:
"Every time knowledge is gained on how fungi respond to novel conditions or ecologically reflective designs, the skill of the cultivator and the depth of dialogue surrounding cultivation advances one step forward. Without experimentation, we will never fully understand what the possibilities are for working with fungi. Indeed, many of the greatest advances in science have arisen due to accident or intuition. If we only repeat what others tell us to believe about the fungi, we deny our ability to form our own relationships with them. It is by slowing down and paying attention to the responses of fungi that we learn most directly from them, enabling the chance to uncover an understanding of their ways that no book could ever teach.
Peter McCoy - RADICAL MYCOLOGY: A Treatise on Seeing & Working with Fungi | Chapter 8 Working with Fungi (pg. 207)
LLMs are like the person who only learns from books, not from experience itself. We probably all initially learned cultivation skills from reading / a mentor / teacher, but it was the experience itself that taught us where the textbook knowledge was wrong or misleading. As Peter says, the mushrooms teach you more about mushrooms than people do, if you learn to listen. Fungi don't text, type, or talk... yet
-------------------- -Absence of evidence is not evidence of absence-
|
redgreenvines
irregular verb


Registered: 04/08/04
Posts: 37,530
|
Re: Exploring AI modeling [Re: lionmane]
#28592427 - 12/21/23 07:40 AM (1 month, 7 days ago) |
|
|
Here is an emergent quality of mathematical procedures, layered and sequenced: https://www.instagram.com/reel/C1AP8FAOndM/ Who would have predicted it? Thus we explore.
--------------------
_ 🧠 _
|
Pinkerton
Ultrasentient

Registered: 02/26/19
Posts: 3,127
|
|
Quote:
Pinkerton said: Something is going on.

|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
|
Quote:
redgreenvines said: I do not understand LLM AI well enough to comment. I know it is not what we are, nor is it akin to other living things.
https://bbycroft.net/llm
It only shows the architecture up to GPT-3, but it's the best illustrative guide you'll find to start understanding the basis of LLMs.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28592481 - 12/21/23 08:29 AM (1 month, 7 days ago) |
|
|
LLMs are like the person who only learns from books, not from experience itself..
Book learning is an experience? An experience of learning through text...
But yes, other modalities are slower to integrate than text. Text is computationally tiny compared to, say, vision. That's why it's so significant that one large AI model seems to be breaking ground for smaller AI models to achieve many new behaviors with a much smaller volume of data.
Tesla, for example, used to have to code traffic laws into their self-driving AI: what a stop sign represents, what yellow vs. white lines mean, turn indicators, speed limits, etc.
Now the model is trained on footage of people driving. The AI learns driving behaviors just from observation and reproduces those behaviors mechanically. This massively improved the AI's ability to generalize scenarios, as not every country, state, or even city uses the same traffic rules, and it was a painstaking process to remove and write new code every time a rule differed. But using video footage and letting the AI learn the rules for itself? That's become as easy as finding relevant video footage. And massive employee layoffs ensued...
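In caricature, the shift looks like this (purely illustrative pseudocode, not Tesla's code):

# Before: traffic behavior written by hand, region by region.
def handcoded_policy(scene):
    if scene.sign == "STOP":
        return "brake_to_stop"
    if scene.line_color == "yellow":
        return "do_not_cross"
    ...  # hundreds more rules, rewritten wherever the rules differ

# After: behavior cloning, i.e. supervised learning from human footage.
def train_policy(model, footage, fit):
    examples = [(frame.camera_view, frame.driver_action) for frame in footage]
    fit(model, examples)  # imitate what human drivers actually did
    return model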
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
redgreenvines
irregular verb


Registered: 04/08/04
Posts: 37,530
|
Re: Exploring AI modeling [Re: Kickle]
#28592519 - 12/21/23 09:08 AM (1 month, 7 days ago) |
|
|
Quote:
Kickle said:
Quote:
redgreenvines said: I do not understand LLM AI well enough to comment. I know it is not what we are, nor is it akin to other living things.
https://bbycroft.net/llm
It only shows the architecture up to GPT-3, but it's the best illustrative guide you'll find to start understanding the basis of LLMs.
Again I am lost. I was hoping the animation and visuals would help, but the interface is confusing me; I am not sure if I should hit the continue button or press the spacebar to continue. Who designed this interface? There seems to be no way of stepping through slowly or going back.

I was lost close to the beginning, and I have pressed the spacebar and clicked the button but did not see consistent behavior from either.
--------------------
_ 🧠 _
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
|
No clue on the UI designer; it seems to be a passion project. I clicked 'continue' when ready to proceed.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
Re: Exploring AI modeling [Re: Kickle]
#28592540 - 12/21/23 09:20 AM (1 month, 7 days ago) |
|
|
Good point, Kickle: the LLM is part of a bigger sensory data stream. It isn't only learning from text; it is using image detection, 'watching' videos, 'listening' to bird calls.
What we were trying to convey: the book experience is a visualization of a physical action. You read about the TEK. In reading, your mind is... visualizing? Then you go do the TEK. You can write about your experience after you perform the TEK, capturing the factual data about it and what you learned and observed. As the writer of that text, you may be able to relive some formed memory of that experience. Someone else reading your diary can visualize how you did it and how they might do it, picking out the real-world facts necessary to replicate the process.
Reading is interesting: you try on someone else's knowledge, experience, and perspective in your own consciousness. In text form, LLMs can become replicants of these textual reflections of consciousness.
Danielle to ChatGPT: Write about consciousness as if you were Terence McKenna
ChatGPT: Ah, consciousness, the ineffable mystery that dances at the edge of our understanding, an enigma that tantalizes the human mind with its elusive nature. What is it, this elusive force that propels us through the cosmic carnival of existence?
-------------------- -Absence of evidence is not evidence of absence-
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28592556 - 12/21/23 09:32 AM (1 month, 7 days ago) |
|
|
The book experience is a visualization of a physical action.
So how do you understand fiction? I've never cast a fire bolt from the palm of my hand, but reading such a thing, I can imagine it all the same.
Now, you may say that's because I have hands and have seen fire. That's a good response, IMO. Combining known elements is exactly what LLMs seem to do in order to increase their understanding of unknown elements, and then they test, test, test to evaluate the degree of interconnection.
The very fact that interconnection is a basis for understanding in an LLM is where "worldview" makes sense. I think a worldview is understanding interconnection. And since LLMs have been trained on the human worldview in an incredibly diverse way, what emerges is a pretty encompassing worldview that reflects humanity. Some of that is very ugly, dishonest, misleading, etc. We try to remove that reflection before we give it back for broad human consumption. But we are hesitant to knock out creativity, which may be seen as ugly or dishonest or misleading depending on context.
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28592574 - 12/21/23 09:56 AM (1 month, 7 days ago) |
|
|
Quote:
Kickle said: Noise is more important in some modalities than others. But cleaning data has become largely automated now where it used to be quite manually intensive. What a human used to have to spend hours doing is now fairly well defined and understood by AI.
Like Google's magic eraser tool, or Photoshop's generative fill. Such manipulations used to be manual and tedious. Now AI groks the concept from basic inputs and executes quite effectively.
But large data samples remain important for minimizing a multitude of potential errors. Oddly, the trajectory I see is that one large model is enough. There is no need for lots of large models, because a small model can reap most of the benefits of a single large model paving the way.
I didn't mean cleaning data; I'll give an example.
So the neural network needs to 'learn' what the word "run" means. If it only has one sentence, that's basically just pure noise; the 'meaning' it creates for the word run from one sentence would be basically meaningless.
As you get more data, I imagine it then comes up with something similar to the most common usage of the word run. But then at some point, suddenly a new meaning for the word run 'emerges', and it also learns the context for when it has one meaning and when it has another. This is a good example of resolution: what looked like one fuzzy thing called run becomes many things (Merriam-Webster has 15 transitive and 16 intransitive definitions).
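You can actually poke at the end result of that process. Here is a sketch using the Hugging Face transformers library (the model choice is just illustrative): the same word gets a different vector in each context.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_run(sentence):
    # return the contextual vector the model assigns to the token "run"
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    idx = inputs["input_ids"][0].tolist().index(tok.convert_tokens_to_ids("run"))
    return hidden[idx]

a = embed_run("I went for a run this morning.")
b = embed_run("They run a small business.")
print(torch.cosine_similarity(a, b, dim=0))  # well below 1.0: the senses have separated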
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28592577 - 12/21/23 10:01 AM (1 month, 7 days ago) |
|
|
Ah. Yeah that happens. I think what baffles most people is when it goes from being trained only on words to being able to understand something like this:
🔎🐠
To represent linguistically: Finding Nemo
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: Kickle]
#28592580 - 12/21/23 10:06 AM (1 month, 7 days ago) |
|
|
It just seems like pattern recognition to me: linguistic patterns, visual patterns, etc. It can even search for novel patterns with specific criteria.
|
lionmane
Edible with Side Effects


Registered: 09/02/19
Posts: 43
Last seen: 6 days, 23 hours
|
Re: Exploring AI modeling [Re: Kickle]
#28592582 - 12/21/23 10:08 AM (1 month, 7 days ago) |
|
|
Kickle~ Your example takes things in interesting directions.
We understand fiction through imagination like you have described.
You can only imagine shooting the firebolts from your hands. [Unless you are 'shooting fire bolts' with a flamethrower. But then you are no longer 'only human' - you are part cyborg using a physical technology to augment your human body much like we are right now as we leverage our computer and network connection to transmit this message.]
Physics imposes limits on the truth of words. There is fiction and non-fiction. Science-fiction does become factual as technology improves, and machine learning speeds the rate at which technology improves because of the vast and growing depth of compute.
Here's where it gets interesting: an autonomous LLM can imagine your fiction of firebolts from hands, and then physically manufacture a robot that could literally shoot fire from its flamethrower hands. Hopefully we have aligned it not to have such a goal!
-------------------- -Absence of evidence is not evidence of absence-
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Freedom]
#28592583 - 12/21/23 10:08 AM (1 month, 7 days ago) |
|
|
Through that same lens, I'd say human learning is just pattern recognition. But it's still incredible when the unexpected happens.
Nothing is disconnected; I don't think that's the claim. But it is unexpected and impressive. We do not understand how it is able to make connections to information it was not explicitly trained on, nor why a large amount of data seems to spur such outgrowth.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
Freedom
Pigment of your imagination



Registered: 05/26/05
Posts: 5,851
Last seen: 7 minutes, 6 seconds
|
Re: Exploring AI modeling [Re: sudly]
#28592587 - 12/21/23 10:14 AM (1 month, 7 days ago) |
|
|
Oh yeah, it is impressive, amazing.
I find similar phenomena as I age: new qualities of life emerge, or ways of living that are totally unexpected, and sometimes it feels almost orchestrated, it's so perfect.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: lionmane]
#28592592 - 12/21/23 10:19 AM (1 month, 7 days ago) |
|
|
Hopefully we have aligned it not to have such a goal!
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592668 - 12/21/23 12:09 PM (1 month, 7 days ago) |
|
|
Quote:
Kickle said: You are the teacher, the AI is the tool, its usefulness comes from your ability to mold it.
Why a tool and not a student? Do you think that learning is an inaccurate way to look at what leads to behavioral changes?
This is a 'tool' that is shaped through dialogue. And the way it appears to change is through understanding. This seems more similar to how we see students progress and less similar to how we see tools invented.
Because it isn't sentient.
If anything the user can be the student too. The student and the teacher.
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592690 - 12/21/23 12:28 PM (1 month, 7 days ago) |
|
|
I guess if that's your line, that's your line.
For me, the similarity in the learning process has more relevance to my linguistic choice than an indeterminate requisite such as sentience. But I do understand that for some it's very important and focal.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592710 - 12/21/23 12:47 PM (1 month, 7 days ago) |
|
|
I'm not comfortable with anthropomorphizing ChatGPT, but giving it some personification? Sure.
Quote:
Anthropomorphism vs. Personification: What’s the Difference?
Personification and anthropomorphism are similar literary devices with a few key distinctions. Personification is the use of figurative language to give inanimate objects or natural phenomena humanlike characteristics in a metaphorical and representative way. Anthropomorphism, on the other hand, involves non-human things displaying literal human traits and being capable of human behavior.
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592726 - 12/21/23 01:06 PM (1 month, 7 days ago) |
|
|
I would contend that holding up a mirror to humanity, looking at the reflection, and saying "that's not human" is true in one sense and not in another.
But there's a lot of ideological play in such a thing. And while many elements of AI reflect what it has been trained on (humanity), calling it mirror-like does not fully capture what AI does to get to such a point. But there are reflections all along the way.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592731 - 12/21/23 01:08 PM (1 month, 7 days ago) |
|
|
How do you describe something that doesn't have its own experiences?
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592735 - 12/21/23 01:10 PM (1 month, 7 days ago) |
|
|
Normal.
Maybe to clarify: I think an experience is not owned. Commonalities in experience can be wonderful ways to connect tho.
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592738 - 12/21/23 01:13 PM (1 month, 7 days ago) |
|
|
Then why isn't AI human in your view?
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: sudly]
#28592742 - 12/21/23 01:18 PM (1 month, 7 days ago) |
|
|
It's not very energy efficient, even if it can consolidate that energy in some specific ways that humans cannot. Humans can use energy in many ways that AI cannot.
-------------------- Why shouldn't the truth be stranger than fiction? Fiction, after all, has to make sense. -- Mark Twain
|
sudly
Darwin's stagger

Registered: 01/05/15
Posts: 10,797
|
Re: Exploring AI modeling [Re: Kickle]
#28592750 - 12/21/23 01:24 PM (1 month, 7 days ago) |
|
|
Energy efficiency distinguishes AI from humans from your view?
-------------------- I am whatever Darwin needs me to be.
|
Kickle
Wanderer



Registered: 12/16/06
Posts: 17,851
Last seen: 1 hour, 31 minutes
|
Re: Exploring AI modeling [Re: Kickle]
#28592753 - 12/21/23 01:27 PM (1 month, 7 days ago) |
|
|
It's one very disparate way. The interconnection between our form and the environment is energetic. The interconnection between AI and its environment is energetic. Our efficiency provides amazing mobility, adaptability, etc., that is pretty unique and so far has not come close to being replicated in machines. In this regard the animal kingdom is much closer to humans.
|
|