
TaoinShrrom
The Action in Inaction
Registered: 07/03/03
Posts: 98
Last seen: 12 years, 4 months

Trying to "quantify" Epistemology
#1915553  09/14/03 03:38 PM (13 years, 7 months ago) 


Greetings,
This is my first post in this forum. I have mostly been focusing on using the resources of this site for cultivation and some antimicrobial research. I have received such great assistance that I thought, why not float a few other projects.
Essentially, what I am trying to do is develop an epistemological model that is quantifiable. I am trying to pick up where Karl Popper left off and resurrect the idea of a firm demarcation between science and philosophy. I know this idea is pretty out of vogue in most areas of philosophical inquiry, but I really think there is something to trying to develop quantifiable notions of falsifiability, truth, meaning, and applicability.
The meat of the idea is to categorize knowledge based on "classes" of observations and the types of theories that are drawn from them. The real stickler is how to quantifiably relate the theories to the observations. I am taking my cue from regression lines. I think most theories can be represented as a regression line, based on the "rules" of the theory, traveling through observation points. The properties of the theory, such as falsifiability, truth, and applicability, can then be related to the closeness of fit and the predictive nature of the regression line. The "classes" of observations are simply the observations most related to the theory, or those that the theory purports to explain.
Theories grow in size as they try to encompass more and more classes of observations, or even try to predict unobservable classes. You eventually reach overarching theories like "God dun it" that encompass all possible classes of observations, but the falsifiability of such a theory is null, because there is no way to observe anything outside the regression line's "fit." Whereas, say, Newton's theory only makes claims about a small number of classes; the possibility of observations appearing outside the domain of his theory is large enough that it has high falsifiability, which seems to make the theory more scientific. Einstein showed that when Newton's theory is pulled into a wider range of observation classes it starts to fall apart. So he developed a new theory that encompassed more classes under one theory but maintained the falsifiability of the original ideas. Einstein and Newton are thus equally scientific, but Einstein is preferable because of the larger domain of his theory. Larger domains don't always make for better scientific theories, though, if they can't maintain a degree of falsifiability.
The nice thing I find about this model is that even after you have long since left the realm of science, there is a way to determine superior theories based on the closeness of fit an idea has against its declared classes of observation. Essentially it's a slight twist on the law of parsimony, but simplicity is not the deciding factor; closeness of fit is. Ideas such as "God dun it" are very simple, but due to their infinitely loose fit they are far less preferable than more complex theories with closer fits. So anyway, if someone has some time to kill, please tear me apart! Thanks for the indulgence as well.

Malachi
stereotype
Registered: 06/19/02
Posts: 1,294
Loc: Around Minneapolis.
Last seen: 7 years, 10 months

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1915813  09/14/03 05:35 PM (13 years, 7 months ago) 


that's too verbose for me to take in. could you.... paraphrase?
 The ultimate meaning of our being can only be fulfilled in the paradoxical leap beyond the tragic-demonic frustration. It is a leap from our side, but it is the self-surrendering presence of the Ground of Being from the other side.
 Paul Tillich

TaoinShrrom
The Action in Inaction
Registered: 07/03/03
Posts: 98
Last seen: 12 years, 4 months

Re: Trying to "quantify" Epistemology [Re: Malachi]
#1916049  09/14/03 07:00 PM (13 years, 7 months ago) 


Yeah, sorry, this is my first time trying to define this outside of my head. It may take me a couple of tries.
Let me try to explain the gist of it somewhat graphically.
Okay, let's say this represents a "class" of observations. Observations are divided into classes based on their perceived connections to each other. Now let's look at a couple of different types of theories that could be developed with this class of observations.
The line represents our attempt at theorizing about the nature of the observations. This particular line offers us nothing at all, since it cannot be applied to observations not specifically made. It is tantamount to saying "given this exact situation, exact place, and exact time, a given behavior will result in this given effect." Something like, "I dropped this pencil in my room on September 14th at 2pm; therefore, in my room at 2pm on September 14th the pencil will fall to the ground." This "theory," or description, is infinitely falsifiable, since any part of the line that crosses through an area without a corresponding observation is false. The problem is that this is basically just a list of observations and conditions, and provides us with no extended knowledge. And while its complete falsifiability seems to make it empirical, I doubt it's what anyone strives for in an explanatory thesis.
The next stage up is the classical regression line. This is an attempt not only to link together the observation class but also to predict everything within that class, whether it was specifically observed or not. It is not as easily falsifiable as our first theory, since points can exist off the line and it's still valid. The further the points are from the line, and the more outliers there are, the less sure we can be of the theory, up to the point at which it is clearly inadequate and should be abandoned. This graying of the falsifiability is made up for by its predictive power in describing the complete class of observations.
Other theories can attempt to link seemingly unrelated classes together, or even unobservable classes. But eventually the ability to explain everything makes a theory useless. For example, you can imagine a function that is claimed to be a circle with an infinite diameter. This function encompasses all possible observation classes, but because of its infinite nature it is completely non-falsifiable, and is therefore of less value than another theory which examines only limited classes but can be falsified. These all-encompassing theories are those like "God dun it" or "It is because it is."
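To make the "closeness of fit" idea above concrete, here is a small Python sketch (the data points and helper names are my own, invented for illustration, not from the thread): fit the classical least-squares line to a class of observations, then score any candidate "theory" by its total squared error against them.

```python
# Sketch: treat a "class" of observations as (x, y) points, fit the
# classical regression line, and score a theory (a prediction
# function) by how well it minimizes squared error on those points.

def fit_regression_line(points):
    """Ordinary least-squares slope and intercept."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def sum_squared_error(predict, points):
    """Total squared residual of a candidate theory."""
    return sum((y - predict(x)) ** 2 for x, y in points)

# Toy observations roughly following y = 2x + 1 with small deviations.
obs = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
slope, intercept = fit_regression_line(obs)

baseline_err = sum_squared_error(lambda x: slope * x + intercept, obs)
bad_theory_err = sum_squared_error(lambda x: 5.0, obs)  # "y is always 5"

# The least-squares line minimizes squared error, so no rival line
# can beat it; theories far from this baseline should be rejected.
assert baseline_err <= bad_theory_err
```

The residual score is one way to make "closeness of fit" a number that two theories over the same observation class can be compared on.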

Malachi
stereotype
Registered: 06/19/02
Posts: 1,294
Loc: Around Minneapolis.
Last seen: 7 years, 10 months

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1916943  09/15/03 12:55 AM (13 years, 7 months ago) 


ok, I think I understand. I'm going to print this thread out and go over it with my epistemology prof on tuesday, if that's alright with you?

TaoinShrrom
The Action in Inaction
Registered: 07/03/03
Posts: 98
Last seen: 12 years, 4 months

Re: Trying to "quantify" Epistemology [Re: Malachi]
#1917125  09/15/03 01:59 AM (13 years, 7 months ago) 


By all means. I have gotten as far as graphically describing about six different levels of theories, and how to compare different theories at the different levels. But it all rests on the ideas in my first couple of posts. I would be interested in any criticism of it.

Malachi
stereotype
Registered: 06/19/02
Posts: 1,294
Loc: Around Minneapolis.
Last seen: 7 years, 10 months

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1917152  09/15/03 02:07 AM (13 years, 7 months ago) 


I wish I had something constructive to offer, but at this point about all I can do is struggle to comprehend your thought (the graphs helped considerably, though).
I've got some really brilliant profs, though, so I'll bump this thread midweek to convey their thoughts to you. In the meantime, don't think too hard, you might break something, and that'd be a shame.

Sclorch
Clyster
Registered: 07/13/99
Posts: 4,805
Loc: On the Brink of Madness

Re: Trying to "quantify" Epistemology [Re: Malachi]
#1917264  09/15/03 02:57 AM (13 years, 7 months ago) 


*reminds self to check this out tomorrow....*
 Note: In desperate need of a cure...

Rhizoid
carbon unit
Registered: 01/23/00
Posts: 1,722
Loc: Europe
Last seen: 9 days, 7 hours

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1917452  09/15/03 06:46 AM (13 years, 7 months ago) 


Quote:
I am taking my cue from regression lines. I think most theories can be represented as regression line, based on the "rules" of the theory, traveling through observation points. The properties of the theory, such as falsifiablity, truth, and applicability can then be related to the closeness of fit and predictive nature of the regression line. The "classes" of observations are simply observations that are most related to the theory, or those that the theory purports to be trying to explain.
A lot of science works exactly like this. The remaining parts also add experimentation to the epistemological process, where arrangements are made to test the predictions of competing theories.
So you're obviously on the right track, but I think you need to quantify parsimony also. Any theory can be extended with some kind of "epicycles" to fit any set of data points. For example, any set of N numbers can be exactly and perfectly fitted by a polynomial of degree N-1. Such a polynomial is a perfect theory just like the God theory, but it is still falsifiable. Should it be considered superior to a more parsimonious theory with a non-perfect fit? I think most scientists would disagree.
PS. Look up "Kolmogorov complexity" if you want some ideas on how to quantify parsimony.
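Rhizoid's "epicycles" point can be demonstrated directly. A short Python sketch (toy data of my own): the unique degree-(N-1) polynomial through N points reproduces every observation exactly, so a perfect fit by itself says nothing about which theory is better.

```python
# Sketch: any N points are fitted *exactly* by the unique
# degree-(N-1) polynomial through them (Lagrange interpolation),
# so "perfect fit" alone cannot distinguish good theories from
# curve-fitting epicycles.

def lagrange_interpolate(points, x):
    """Evaluate the degree-(N-1) polynomial through N points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Five toy observations (near-linear, with small deviations).
obs = [(0.0, 1.0), (1.0, 3.2), (2.0, 4.9), (3.0, 7.1), (4.0, 9.0)]

# The interpolating polynomial reproduces every observation exactly,
# no matter how the points were generated.
for x, y in obs:
    assert abs(lagrange_interpolate(obs, x) - y) < 1e-9
```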

TaoinShrrom
The Action in Inaction
Registered: 07/03/03
Posts: 98
Last seen: 12 years, 4 months

Re: Trying to "quantify" Epistemology [Re: Rhizoid]
#1918015  09/15/03 02:09 PM (13 years, 7 months ago) 


Does the law of parsimony really help us escape "perfect fit" theories? I have always been under the impression that "perfect fit" theories would start to fall apart at the predictive stage. So a theory that is too well fitted to a given set of observations in a class will fall apart when new observations are brought in. You could get into a bit of an infinite regress here, always updating to a new perfect fit, but you could halt that by simply finding a theory that didn't require redefining after every new observation.
You are right that my handling of Occam's razor isn't highly developed at this point. I am not sure how I really feel about Kolmogorov complexity. I wonder if a Universal Turing Machine is too deterministic. Maybe you could point me in the direction of using a UTM for stochastic equations? I suppose you could always be feeding in some sort of randomness into the machine, but then doesn't that make a given program length random as well, so that the minimum set of bits can never really be determined?
At this point I am wondering if some kind of "distance" function could be used for describing complexity, but again the stochastic nature of a lot of problems bothers me. I wonder if any sort of universal complexity or simplicity can be defined for a given theory. I tend, at this point, to place a lot of weight on predictability, both inside a given class of observations and then into other classes. For example, can the complexity or simplicity of a quantum probability theorem ever be described universally?
I am not sure. Just some thoughts.
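The intuition that perfect-fit theories "fall apart at the predictive stage" can be checked with a toy hold-out test in Python (data and names are my own, for illustration): train a least-squares line and an exactly-interpolating polynomial on the same observations, then score both against one fresh observation.

```python
# Sketch: an exactly-fitted theory can lose to a looser-fitting one
# once both must predict a new observation from the same class.

def ols_line(points):
    """Least-squares line as a prediction function."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return lambda x: slope * x + (my - slope * mx)

def lagrange(points):
    """Exact interpolating polynomial as a prediction function."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Training observations near y = 2x + 1, then one fresh observation.
train = [(0.0, 1.0), (1.0, 3.2), (2.0, 4.9), (3.0, 7.1), (4.0, 9.0)]
new_x, new_y = 5.0, 11.1

line = ols_line(train)
poly = lagrange(train)  # fits every training point perfectly

line_err = abs(line(new_x) - new_y)
poly_err = abs(poly(new_x) - new_y)

# The perfect-fit theory does worse on the unseen observation.
assert line_err < poly_err
```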

Sclorch
Clyster
Registered: 07/13/99
Posts: 4,805
Loc: On the Brink of Madness

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1918176  09/15/03 03:00 PM (13 years, 7 months ago) 


Hmm... intelligent discussion going on... me like.
As far as this "best fit" business goes, I've always had a similar mental process going on inside my head, but I call it the Coherence Quotient.
I like what you're trying to do here... but I can't see the practical application of it.
What determines the configuration of the plots for each class of observations?
What determines their location in relation to the classical regression line?
What determines the location of the classical regression line?
You have a lot of loose ends that need to be taken care of before you can proceed. Don't be discouraged though.
Quote:
I wonder if a Universal Turing Machine is too deterministic. Maybe you could point me in the direction of using a UTM for stochastic equations? I suppose you could always be feeding in some sort of randomness into the machine, but then doesn't that make a given program length random as well, so that the minimum set of bits can never really be determined?
This is NOT free will, in my view. Free will is not just a set of deterministic choice cascades where the initial mover was random input. Though, I will say that it is possible that such processes DO occur in the brain... as the brain is most certainly capable of having patterns (preferences, favorites, etc.) and possibly situations of null preference (either choice cascade is just as viable). Free will is not merely a stochastic equation. I'll stop now, I'm straying from the topic here....

Rhizoid
carbon unit
Registered: 01/23/00
Posts: 1,722
Loc: Europe
Last seen: 9 days, 7 hours

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1918616  09/15/03 05:30 PM (13 years, 7 months ago) 


For stochastic theories, you just use some statistical measure of the observations to fit the theory, instead of using the observations themselves.
One measure of the parsimony of a theory could be the length of the theory in bytes when written down as English text, together with all the parameters it needs to reconstruct all the data (observations) under consideration. There can be many such texts for the same theory, since there are various different formulations of it. The length of the shortest text corresponds to the Kolmogorov complexity. The only difference here is that an English-speaking human acts as the interpreter instead of a Turing machine.
Another theory will have a different complexity value, which can be numerically compared with the first. Unfortunately, this complexity measure has the same problem as the Turing machine version: the length of the shortest string depends on the language used and other prior information within the interpreter. But parsimony is a human judgement, so I think this is the best we can do.
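Rhizoid's description-length measure can be roughly approximated with an off-the-shelf compressor standing in for the interpreter. A Python sketch (the theory strings are my own toy examples, and the score depends on the compressor, just as Rhizoid says the true measure depends on the language):

```python
import zlib

# Sketch: Kolmogorov complexity is uncomputable, but the compressed
# length of a theory's written statement is a crude, computable
# stand-in for its parsimony, relative to a fixed compressor.

def parsimony_score(theory_text: str) -> int:
    """Compressed size in bytes; smaller means more parsimonious."""
    return len(zlib.compress(theory_text.encode("utf-8"), 9))

# A compact rule vs. a bare enumeration of the same observations.
rule = "y = 2x + 1 for all observed x"
enumeration = ("y = 1.0 at x = 0; y = 3.2 at x = 1; y = 4.9 at x = 2; "
               "y = 7.1 at x = 3; y = 9.0 at x = 4; no other claims")

# The generating rule costs fewer bytes than the raw list it explains.
assert parsimony_score(rule) < parsimony_score(enumeration)
```

The comparison is only meaningful between scores from the same compressor, mirroring Rhizoid's caveat that the measure depends on the interpreter's prior information.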

TaoinShrrom
The Action in Inaction
Registered: 07/03/03
Posts: 98
Last seen: 12 years, 4 months

Re: Trying to "quantify" Epistemology [Re: Rhizoid]
#1920315  09/16/03 02:49 AM (13 years, 7 months ago) 


First of all, thanks to those who are checking this out; I know that epistemology is not really the most glamorous of philosophical investigations. But I really think that anyone interested in any field that attempts to discover any kind of truth (or perhaps the complete absence of truth) has to start with the question of how the theories in our minds actually work. I guess that's where I find the practical applicability of this. My educational background has wound up fairly varied due to a massive shift in my major, and I have been left to ponder the different types and levels of theories. Karl Popper motivated me into this area with his work on conjecture and refutation, but I think one of the problems with his demarcation of science and metaphysics was that he didn't take into account the different levels of theories, which create a blending at the demarcation line, so that it becomes very difficult to separate empirically grounded metaphysics from highly theoretical scientific theories. I also think that a lot of the methods for analyzing theories that produce measurable observations can be applied, with some changes, to analyzing less observable ideas.
With that said, to address some of the issues brought up.
What determines the configuration of the plots for each class of observations?
Well, initially I am basing the plots on related variables that have a measurable quantity. The idea is that something links the variables together in a cause-and-effect manner; this proposed causal link is the theory. The plot itself is simply a collection of observations distributed in whatever manner is observed.
What determines their location in relation to the classical regression line?
Well, the classical regression line is determined by globally minimizing the sum of the squared differences between the observed points and the line. But in my case the regression line is the predicted value of any given observation based on the cause-and-effect theory. Any theory can be compared to the classical regression line to determine how well it minimizes the overall error. Theories that fail to get anywhere close to minimizing error should be rejected.
What determines the location of the classical regression line?
Same basic idea. The theory of the cause-and-effect relationship should yield a quantifiable prediction that allows a predictive line to be drawn on the plot. The classical regression line is used as a comparison for error minimization. One note: the classical regression line is not a theory in and of itself, since it relies on no cause-and-effect relationship, purely a correlation, and correlation alone has little theoretical value.
As for free will, perhaps another thread about determinism would be interesting, since my innate desire has always been to find a rational reason for indeterminism, but I have failed on all accounts. My last vestige of hope is quantum theory.
Back to Kolmogorov complexity: when I was talking about stochastic processes and the Turing machine, I had in mind essentially Random Boolean Networks. To me, Random Boolean Networks are essentially Turing machines with a completely stochastic nature. One application of an RBN, for example, is in determining the differentiation of stem cells. The RBN seems to simulate the random genetic network inside cells quite well. The goal in obtaining an output from an RBN is to reach attractor points, or giant repeating cycles. These cycles correspond to differentiated networks.
What, then, is the most parsimonious program for an RBN? This seems impossible to me to solve using Kolmogorov complexity. Since the attractor cycles are so extremely sensitive to initial conditions, both in the input and in the machine logic itself, no one program can ever be guaranteed to create the same output. So if the simplest solution is the comparison of the bit size of the program that generates a given attractor cycle, but the bit size of that program by its very nature is always changing, you seem to have to rely on some other idea to determine simplicity. Descriptive theories relying on the program and machine logic don't really get anywhere; your best bet is modeling these RBNs with continuous ODEs and predicting attractor points on a probability scale. So essentially we are back to the original graphs: predicting the location of attractor points, then measuring attractor points and comparing them to the predicted areas. What, then, is the simplest solution? Again, my thoughts in this area are still very early in development. I oscillate between trying to treat the predicted attractor points as observation points, and trying to examine the actual ODEs and look for simplicity there.
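For readers unfamiliar with RBNs, here is a minimal Python sketch of the setup being described (the node count, connectivity, and seed are arbitrary choices of mine): N boolean nodes, each updated by a random boolean function of K randomly wired inputs. Because the state space is finite, every trajectory must fall into a repeating cycle, an attractor.

```python
import itertools
import random

# Sketch of a Random Boolean Network: n boolean nodes, each driven
# by a random boolean function of k randomly chosen inputs. The
# state space has only 2**n states, so iteration must eventually
# revisit a state and thereafter repeat a cycle (an attractor).

def make_rbn(n, k, rng):
    wiring = [tuple(rng.randrange(n) for _ in range(k))
              for _ in range(n)]
    tables = [{bits: rng.randrange(2)
               for bits in itertools.product((0, 1), repeat=k)}
              for _ in range(n)]

    def step(state):
        return tuple(tables[i][tuple(state[w] for w in wiring[i])]
                     for i in range(n))
    return step

def find_attractor(step, state):
    """Iterate until a state repeats; return the repeating cycle."""
    seen, history = {}, []
    while state not in seen:
        seen[state] = len(history)
        history.append(state)
        state = step(state)
    return history[seen[state]:]

rng = random.Random(42)
n = 8
step = make_rbn(n=n, k=2, rng=rng)
start = tuple(rng.randrange(2) for _ in range(n))
cycle = find_attractor(step, start)

# The cycle really repeats: stepping past its last state returns
# to its first, and its length cannot exceed the state space.
assert step(cycle[-1]) == cycle[0]
assert 1 <= len(cycle) <= 2 ** n
```

Note that each update is fully deterministic once the wiring and truth tables are fixed; the "randomness" lives entirely in how the network was constructed, which is the tension being discussed here.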

Rhizoid
carbon unit
Registered: 01/23/00
Posts: 1,722
Loc: Europe
Last seen: 9 days, 7 hours

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1920683  09/16/03 06:23 AM (13 years, 7 months ago) 


The role of the Turing machine is not to emulate reality, it's to predict observations. Stochastic data can't be predicted (by definition) so if stochastic data are involved they must be inputs to the theory. In order to predict a set of observations from a theory, you need some sort of initial input. For example, to predict the speed of a rock that falls under the force of gravity, we need initial measurements of the location and velocity of this rock, in addition to the theory itself. Theory plus Parameters predict the observations.
For every given set of parameters, this prediction is a completely deterministic process. In fact, I don't think a theory is falsifiable at all if it can give different predictions for the same data.
The only reason why one might want something other than a Turing machine to interpret the theory is to avoid the criticism that a Turing machine restricts the range of possible theories. Using a human to interpret the theory removes any such restrictions, if they exist. But even when a human interprets a theory, any calculations that need to be performed will be done exactly as if they were done by a Turing machine, of course.

TaoinShrrom
The Action in Inaction
Registered: 07/03/03
Posts: 98
Last seen: 12 years, 4 months

Re: Trying to "quantify" Epistemology [Re: Rhizoid]
#1920993  09/16/03 11:27 AM (13 years, 7 months ago) 


But stochastic data can be predicted. It's what I am doing for a living right now. Even with huge Gaussian noise variables inside the equations, you always arrive at essentially the same attractor points. The path taken to reach the attractor points is what changes. The theory doesn't give different predictions for the same data: the attractor points always stay the same, but the path to get to them, or the program itself, changes significantly. Essentially you can't make a claim about the number of steps necessary to reach a given attractor point. But that doesn't prevent you from developing a lot of theories about the arrangement that are quite falsifiable.
What is the relationship between initial conditions and iterations to reach an attractor?
What is the relationship between the number of attractor points and the number of binary variables and Boolean rules?
I don't think a Turing machine alone can address an RBN. Somewhere along the line you have to add an extra step of analysis.

Rhizoid
carbon unit
Registered: 01/23/00
Posts: 1,722
Loc: Europe
Last seen: 9 days, 7 hours

Re: Trying to "quantify" Epistemology [Re: TaoinShrrom]
#1921790  09/16/03 04:17 PM (13 years, 7 months ago) 


Quote:
But stochastic data can be predicted.
Not the random elements of such data. If it can be predicted, it's not random. If it can be partially predicted but only up to a bump in a probability distribution, then it's a deterministic function of some random variable that really is completely unpredictable.
Quote:
The path to reach the attractor points is what changes.
So the path is not predicted, just the set of points that shape the attractor. The shape of the attractor is essentially a bump in a probability distribution. But you don't need any randomness in your Turing machine to calculate this probability distribution. All you need is a deterministic procedure to generate sample data that can be used to simulate randomness. A good pseudorandom generator will do the trick if aliasing effects from the sampling procedure are a problem.
I'd better stop, because I realize you must already know that stuff. It's obvious that we must be talking about different things here. What exactly are your definitions of "theory" and "prediction"?
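Rhizoid's point about deterministic pseudorandomness can be illustrated in a few lines of Python (the coin-flip process is my own toy stand-in for a stochastic theory): with a fixed seed, the estimated distribution, the "bump," is itself a completely deterministic output of theory plus parameters.

```python
import random

# Sketch: a deterministic pseudorandom generator is enough to
# "predict" the shape of a stochastic outcome. The same seed always
# yields the same estimated distribution, so the prediction is a
# deterministic function of (theory, parameters, seed).

def estimate_distribution(seed, trials=10_000):
    rng = random.Random(seed)
    # Toy stochastic process: number of heads in 10 coin flips,
    # whose distribution is a bump centered near 5.
    counts = [0] * 11
    for _ in range(trials):
        counts[sum(rng.randrange(2) for _ in range(10))] += 1
    return [c / trials for c in counts]

a = estimate_distribution(seed=1)
b = estimate_distribution(seed=1)

# Determinism: the same seed gives an identical predicted bump.
assert a == b

# The bump sits roughly where the theory says it should.
assert max(range(11), key=lambda i: a[i]) in (4, 5, 6)
```

Individual runs of the flips remain unpredictable, but the distribution, the thing the theory actually claims, is reproduced deterministically, which is the sense in which the prediction stays falsifiable.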
 



