Do Humans Operate Like Computers? (Kant) – 8-Bit Philosophy
Does a human being operate differently than a computer? If I am aware of how a computer is programmed, I can predict how it will respond to input, and therefore I can control it. What if science could quantify our brains and reduce them to incredibly complex algorithms? Could human behavior then also be predicted? And, if so, are we responsible for our actions? Immanuel Kant addressed this problem of determinism in his book, “The Critique of Practical Reason,” which is about as dry as it sounds. Kant says that humans, like all else in the physical world, are subject to cause and effect. For example, if I am ill, I will want to be healthy. Or, if I have low health, I can’t help but want a health pack. However, human beings have the ability to will. And, this “will power” is free. Even though I’m at low health, I can deny the desire to refill it. If we weren’t free, we wouldn’t be able to deny our inclinations through what Kant calls the moral law. So, let’s assume there are two players. Player 1 has over half health and player 2 has almost none. Even though player 1 wants a health pack, he believes it is his duty to give it to player 2. Unlike computers, human beings know what they ought to do. Thus his famous aphorism: ought implies can. So, if you recognize an obligation, then you are implicitly free to do it or not to do it. If humans were pure calculation like computers, there would be no sense of ought — we would just do. Now, there are many philosophers who scoff at the idea that humans can transcend cause and effect. Some say we have no freedom whatsoever. But, as science progresses, only time will tell where the line lies between man and machine.

100 thoughts on “Do Humans Operate Like Computers? (Kant) – 8-Bit Philosophy”

  1. I am not sure that human beings have always known what they ought to do. I believe that the 'oughts' evolved as society evolved. For example, couldn't the same patterns play out with computers, and to some extent aren't they already – such as the JS library hoverIntent, which instructs the computer to infer the user's intent based on mouse movement speed and other factors. With this in mind, consider that children learn to share by parents pointing out times when it is socially acceptable to be selfish and times when there is an implied agreement to share. Take Halloween candy – generally it is OK to be a bit stingy (perhaps even expected), but if someone has a lot less than you, sharing is both implied and often enforced by the parents. Naturally, not every parent believes the same things, and tolerances differ, just as behaviors vary from child to child. Still, I am not sure a distinction can be made between social training and programming that attempts to infer user intent based on behavior. Now consider two people meeting from vastly different times. At various points in US history, shooting someone for calling you a liar was acceptable; it even had rules around how to employ it and legal ramifications for failing to show up. That said, I believe we can all agree that a duel today would not only be illegal, it would likely be viewed as immoral. So if a person from the past were to meet a person from the future and ask them to duel, who would be the morally superior? On the face of it, anyone reading this will likely side with the future person – a world in which we solve problems via dueling would suck – but if the challenge was made in the past timeframe, wouldn't the future person be obligated to surrender his moral beliefs in light of the beliefs of that time?
Ultimately, if what we 'ought' to do is mutable, how can moral law be a measure of consciousness? Wouldn't it be a better measure of self-organization and social complexity than of individual identity? Unless, of course, we look at each choice as a completely individual action?

  2. Yo, thinking logically isn't always the answer, because if you think about it, logic was created for us to comprehend how stuff works, so in reality logic doesn't make sense. The true way to understanding is spirituality, like connecting with life around you. Like knowing you're connected with every living thing on earth, like knowing spirit does what it wants and is not limited. People tend to think there are limits, but not really; the mind is very powerful, and there is a lot of stuff in the world that is very possible that humans would think is entirely impossible. Limiting what you think is possible will get you nowhere. Never limit anything; come at everything with an open mind, because just because you've never heard of it, seen it, or can't comprehend it, doesn't mean it's wrong.

  3. Simple calculation: 80% Player 1 + 100% Player 2 > 100% Player 1 + 20% Player 2.
    It's not an obligation to let player 2 have that health pack; it's a necessity for greater chances of survival as a team.

  4. Determinism actually isn't that interesting of a question.

    Free will can be a property of an object in a system, but it's just a little bit tricky to connect philosophical and mathematical language and figure out exactly what that means.

    The video equates 3 things that actually aren't necessarily equivalent mathematically.  It equates a complete description of an object's behavior with the ability to predict all responses and an ability to control it.

    If you have a complete description of an object's behavior, that doesn't give you the ability to predict all responses.  In order to predict the behavior using the description, you have to run computations.  All of a sudden you run into problems like the Entscheidungsproblem and the halting problem.

    Even if you have the ability to predict, once again, that does not give you the ability to control.  Your ability to control depends on 3 things: observe the state of the object and system, perform independent computations, and influence the object/system.

    In mathematics, no one spends any time talking about controlling objects.  What you control is response variables in a system.  These response variables could be attributes of particular objects or other attributes of a system.

    The determinism of the universe says nothing about the validity of terms such as "free will", etc.  The validity of words such as "free will" only depends on where you draw boundaries within a system.  If you distinguish between things that happen in your mind, and things that happen outside of it, then the term free will makes perfect sense, determinism or no.  IMEO (in my exalted opinion) it is perfectly reasonable and logical to draw a boundary between things that happen inside your mind versus things that happen outside it.

    The other thing that people tend to get wrong is they think free will is about having a soul.  Somehow it is about having something that is not part of the physical universe, but which is still connected to and interacts with the physical universe.  Guess what, science has already answered this question.  Our minds are actually part of the physical universe; our minds are software executing on the hardware of our brain, which exists inside the universe and functions following all the same rules of physics that the matter external to our brain follows.  However, there still is a strong boundary between what happens in our minds and what happens outside our minds.  Thus it makes sense to think of our mind as separate from the physical universe, but still connected to the physical universe.  However, this separation merely exists because there are strong boundaries, not because it is some kind of soul created from non-physical mystery matter.

    Our sense of wonder and mystery and power related to the soul is insightful and in fact accurate given what we currently know about the potentially complex behavior of computational systems.  Whenever mathematicians have tried to create theorems intrinsically limiting or completely describing what computational systems can do, they have come up empty handed, and only been able to show that certain things are impossible to prove or solve in all cases.

    As to the question of whether we are responsible for our actions, once again, that depends on where we draw the boundaries.  The pronoun "I" refers to an independently acting and loosely coupled part of the universe.  Yes, that part is responsible for its actions; that's one of the features that makes it intelligent.  It also makes sense for other loosely coupled and independently acting parts of the universe to consider that part responsible for its actions and to hold it accountable when appropriate.  This doesn't inherently give one part control over another; it's more like a mutual interaction.

    TLDR; Earlier philosophers struggled to reconcile physics and philosophy, because they didn't understand the way we create boundaries between living intelligent beings and the rest of the world in order to better describe the interaction between them.  Now we understand these boundaries and relationships, as well as how philosophical ideas correlate with physical and computational ideas, so these questions are solved.

  5. Do we have the ability to will, or is it just an illusion caused by our inability to identify and understand all the variables related to our decision making process? Impossible to answer, yet.

  6. The only reason why we have an ought that is available to us, is not because we are flesh and bones or because of the soul. It's because our machinery is more complex than a computer, – for now that is.

  7. Whenever I see a robot turn on its creator because it has the capacity for emotion, I always ask, "Why the hell would you give a robot emotions in the first place?! It's made to do a job, often in a situation where no human could possibly do it! It doesn't need any emotions to do that job!"
    I mean, at some point, it's gonna stop taking orders, and all those emotions probably take up a lot of space.

    We humans think a lot of ourselves, even though we're all pretty stupid.
    (Come on, admit it. You're not all that.)

    We can choose what we want, but not what we get.
    You want to be at Point B, so you start off at Point A.
    There's no telling what will be in between A and B, but you know that there's a certain space between them.
    If there are obstacles in our way, we have to get over them.
    (Point A1, Point A2, Point A3, A4, A5, A6, etc.)
    Even if we find a better route, we still have those obstacles, starting from Point C.
    (Point C1, Point C2, Point C3, C4, C5, C6, etc.)

    It's not so much about us behaving like computers.
    It's about knowing that we, as humans, have the ability to start off with emotions, the ability to choose, and later become set in a mechanical line of thought. After all, robots can only learn HOW to feel emotions, right?


  8. Cause and effect affect us and make our brains work. There is only one possible choice for one person in the exact same situation. You could think that means you have no free will. However, you are not dissociated from the physical nature of your brain. Your personality is defined by the state of matter in your brain. Therefore, when you make a choice, even though it was physically impossible to make another, it is still a choice, as, had you had another personality (configuration of matter within your brain), you would have made a different choice.

  9. No, we made computers, so the question is: are computers starting to operate like us? Don't you agree, Wisecrack? Do you see video games as the start of robotic AI and holograms?

  10. There is no moral "ought." Humans act out of social needs, so if someone is starving but you yourself are hungry, you will give the other person food if it serves your social need more than your physiological one. Those who go by "oughts" are doing so out of fear of punishment, not pro-social well-being, and it would be a disingenuous charity.

  11. ought (verb): (1) used to indicate duty or correctness. (2) used to indicate something that is probable.

    Let's assume (for the time being) that humans are like machines, and a sense of duty or morality doesn't exist. In this case, the word "ought" can only imply probability, or that a person is merely likely to do something.

    Machines are already capable of this approximate response through fuzzy logic, where binary logic is not always used and partial knowledge is allowed. While this does not completely confirm that humans are like machines, it does allow the possibility that this is the case.

    This conclusion may be faulted for its circular reasoning, or its overly linguistic nature that potentially distorts the meaning of the word "ought." I merely present this so that it may be criticized.
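The fuzzy-logic idea above can be sketched in a few lines. This is a minimal illustration, not a real fuzzy-inference library; the function names and the 0–100 health scale are invented for the example:

```python
# Illustrative sketch only: crisp (binary) vs. fuzzy versions of the "ought"
# decision. Function names and the 0-100 health scale are assumptions.

def binary_ought(my_hp, other_hp):
    # Binary logic: a crisp True/False duty.
    return other_hp < my_hp

def fuzzy_ought(my_hp, other_hp):
    # Fuzzy logic: the degree of "ought to give" grows with the health gap,
    # clamped to [0, 1] instead of collapsing to True/False.
    gap = (my_hp - other_hp) / 100.0
    return max(0.0, min(1.0, gap))

print(binary_ought(80, 20))  # True -- a crisp duty
print(fuzzy_ought(80, 20))   # 0.6 -- "probably ought to"
print(fuzzy_ought(55, 50))   # 0.05 -- barely inclined
```

The fuzzy version returns a degree of "ought" rather than a yes/no, which is the partial-knowledge behavior the comment describes.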

  12. I think this video is under-thinking it. Yes, humans are like machines, brains are like computers, what other alternative is there?

  13. IMO the human brain ultimately works like a computer of sorts, just with a radically different architecture and complexity. It also doesn't compute precise mathematical values and logic, but fuzzy ones.

  14. I'm confused… The question of what I ought to do in a given scenario is no different from asking what the most efficient action is in that scenario. Isn't that what all basic artificial intelligence is programmed to do?

  15. Wouldn't a better question be: "are we designing the way computers act based on how we understand ourselves to act?"

    The behavior of allowing a socially accepted other to obtain the health pack over your own, lesser, need is explainable, predictable, and can be programmed into the actions of a machine.

    Our brains work with complicated input and reasoning.  Computing can be just as complicated.  Comparing computer programming to "me want food now" is disingenuous.

  16. Sort of assumes the machine would subscribe to "individuality," and does not take into account its ability to see another as a valuable "comrade" (i.e. extra source of damage, information, external function). Is it not possible there could be varying degrees of hive mentality? In a way, couldn't human empathy work in the same way, some type of incredibly low-lying hive mind resulting in cooperation, and thus, the success of a species?

  17. But what about sociopaths? They don't have the "ought". They will just do. Are they more computer than anyone else then?

  18. But isn't this reflection on whether or not to act on what we ought to do also subject to cause and effect? If the mind is truly a mere biological machine, then why should we not be able to predict its "output," so to speak, if we completely understood the human brain? If the choice can be predicted, then it isn't a choice. Of course, that surely doesn't mean that there is no point in making the choice, regardless of whether or not free will is an illusion.

  19. But is the choice of the health pack a real choice? How do we know that previous experiences and perception haven't already determined what player 1 will choose? After all, we always choose the "best" option. What that "best" option would be is surely determined by our perception. Do we OWN our own perception?

  20. I think Kant really meant that "if there are occurrences where we ought to do something, but do something else instead – that means we have freedom". Because in a situation where you REALLY feel you ought to do something (not because someone told you, not because someone forces you, but because you genuinely believe you ought to do it) – you are kinda "determined" to do it, since the decision that it is the right way is "already made". Still, besides being made, this decision must also be "carried out" – and sometimes, mysteriously, we do not carry it out. And afterwards, looking back, we may think that we were free – we did wrong, but we were free, because, after all, there was no reason for us to "do wrong", so it must be this mystical un-determined freedom. Actually, it is not quite correct, though – psychology teaches us that there are many unconscious processes that we are not aware of, so even if all the thoughts and feelings we REGISTER tell us we "ought" to do something, we end up not doing it because of something that we do NOT register, but which is as potent, or even more potent. But in Kant's time, unconscious processes may very well have been unknown. So the situation where a person knows what he is going to do, but then does otherwise, could hardly be interpreted any other way than as "mystical freedom from determinism".

  21. If I coded a computer to recognize others' suffering and heal them if they are more hurt than itself, then it has a sense of right. If I then teach it to identify whether that person will be able to help himself, so that the machine may help itself instead, then it has a sense of ought. Therefore we are complex machines, and that's fucking scary.

  22. No, no, no! Just because our morals point to ought instead of do does not mean that it is not mere calculation! Because ought is calculated by how much the action benefits us, INDIRECTLY…

  23. Surely, if computers were set up to work in groups, like humans, they would understand the "ought" – not because of morals, but because every link in their group needs to be well balanced. If one falls, or suffers greatly, it would mean a huge disadvantage in the functioning of the setup.

  24. The more I learn about philosophy, the more I learn what I already know: free will is non-existent, and nothing has ever changed since the creation of gods. We only call it science nowadays, and yes, even that makes us discover we are slaves to our 'program'. So does God exist? Pretty much. Does he have a plan? Yessir. Are you gonna rage about this? Why should I even doubt it. Enlightenment is a farce. I can write a book tomorrow and call my new philosophy "blabberism", and it will tell you just the same as everyone that came before me; I'll just use different words so you are convinced this is actually the new big thing in human evolution. Truth is, words are quite meaningless. They're limited, so it's natural that what they try to convey is repetitive. Only a good writer knows how to deceive the reader.

    Why don't you go outside and play, look at some pretty butterflies and feel amazed how all of this ever started existing just by chance. Stop caring what it should mean to all of us, care what it means to you and share it with the world. Be an artist for humanity's sake.

  25. How about if we create an advanced AI, so advanced that it can make its own choices and such the way we do. The AI was programmed, it was designed.
    If, or when, we can create an AI that thinks and reasons like humans do, the question will be answered: Either we are like a machine, with every type of predetermined outcome programmed into us, or we were just programmed to not be like a machine. Or something else.

  26. It is very easy to prove determinism as a fact of the universe.

    Imagine that a mind knew the position and velocity of every particle in existence: because those 2 values are not subject to random variance and are the cause of every effect, the mind would accurately predict the future.

    All decisions made by agents are the result of collisions between particles in the brain and because those collisions can be predetermined by positions and velocity, free will does not exist.
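The thought experiment above (a mind that knows every position and velocity) is essentially the claim that a deterministic system is reproducible. A toy sketch of that premise, using a hypothetical one-dimensional particle rather than real physics code:

```python
# Toy illustration of the comment's premise: if the state (position, velocity)
# fully determines the future, two runs from the same state agree exactly.
# This is a made-up 1-D particle under constant gravity, not real physics.

def simulate(pos, vel, steps, dt=0.01, g=-9.8):
    trajectory = []
    for _ in range(steps):
        vel += g * dt    # the force determines the new velocity
        pos += vel * dt  # the velocity determines the new position
        trajectory.append(pos)
    return trajectory

run1 = simulate(10.0, 0.0, 100)
run2 = simulate(10.0, 0.0, 100)
print(run1 == run2)  # True: same initial state, same future, every time
```

Note this only illustrates the comment's assumption; whether the real universe's particles behave this deterministically is exactly what is in dispute.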

  27. But there is a reason for you to consider giving the health pack away and denying yourself: just as when there is an update for current software, the computer notifies the user to install it to make the computer better, a health pack helps your progression toward surviving in your current state.

  28. But it's statistically and intrinsically better for player 2 to get the health kit, because player 2 is on your side, and if they die, that's terrible. By giving them the health kit, you are increasing the chance of success. You want to succeed, therefore you give it to them. The only reason you would take it for yourself is if you decided that you didn't need them. Also, you WANT to keep your friendship with player 2, and you WANT health kits when you need them most.

  29. I would argue that the desire to give something to someone else rather than ourselves has also been programmed into us through social engineering. We are social creatures, and in order to survive through the ages, we've had to take care of one another. If we didn't, humans would've been extinct a long time ago. The desire to provide for another could be programmed into our brains in the same way we care for the self. Animals also do this; just look at elephants, or mice, or even cats. The natural instinct to provide or feed is something almost all animals possess, and I think it could be argued it is a part of their own natural programming. After all, genes are nature's "binary code."

    You could also argue that deciding NOT to help the other person is a part of our programming, for the person could've done it for humor's sake (which animals also do) or even just a compulsive need to protect the self. 

    So to answer the question, do we operate like computers? Well, we may not be like the computers we have today, but I think we share many similarities, and eventually computers could become exactly like us.

  30. The difference lies more or less in programming. Once we can get computers to "understand" vague concepts, programming in morality won't really be that hard.

  31. Well, the philosophy here assumes that a computer cannot observe the benefits of letting others take what we could. It was incredibly oversimplified.
    From the example, the health pack gave +4 health on pickup. For the computer in question to take it would waste 3 health, diminishing its value, whereas if it was given away, it gets maximum efficiency. A computer with only the goal of maximum efficiency would choose to let the other have it.
    But… then what if the other person subsequently attacks the computer with the newly boosted health? They each lose 1 health per attack, and attack at the same time. Now, if the computer gives the health pack away, there's a loss of 8 health, as well as the loss of the computer's life. Wah wah.
    If the efficiency computer could think of this, then clearly the best tactic is to take the health pack, so that there'd be a net loss of only 1, rather than 8.
    But how do you determine the chances of being attacked, rather than teaming up? And what about running? …and it gets more and more complicated, but it can be simplified and quantified by deduction. Although then there are uncertainties, which are handled by chance prediction and determination, and by defining what is an acceptable risk.
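The chance-of-attack question at the end can be framed as a plain expected-value calculation. A sketch using the comment's toy payoffs (−1 for taking the pack, −8 for giving it away and then being attacked); `p_attack` is an assumed input, since the comment never says how to estimate it:

```python
# Expected-value sketch of the health-pack dilemma above.
# Payoffs are the comment's toy numbers: taking the pack costs a net -1;
# giving it away costs -8 if the ally then attacks, 0 if they stay friendly.
# p_attack (the chance of betrayal) is an assumed input.

def best_action(p_attack):
    take_value = -1.0
    give_value = p_attack * -8.0 + (1.0 - p_attack) * 0.0
    return "take" if take_value > give_value else "give"

print(best_action(0.05))  # give: ally is almost certainly friendly
print(best_action(0.50))  # take: too risky to hand over the boost
```

With these numbers the tipping point is p_attack = 1/8; past that, the "efficiency computer" keeps the pack.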

  32. 1:24 – you can program AI to let you have the health pack if you need it;
    you can also program it to choose randomly whether to give it to you or not.

    Example pseudocode:

    if player2.hp < hp
        // ignore the health pack
    else
        // take the health pack
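A runnable version of this pseudocode might look like the following; the `Player` class, the hp values, and the 50/50 coin flip are assumptions added for illustration:

```python
# Runnable sketch of the pseudocode above. The Player class, hp values,
# and the 50/50 coin flip are invented for illustration.
import random

class Player:
    def __init__(self, hp):
        self.hp = hp

def should_ignore_healthpack(me, player2, mode="deterministic"):
    if mode == "random":
        return random.random() < 0.5   # coin-flip generosity
    return player2.hp < me.hp          # yield the pack to whoever is worse off

me, ally = Player(80), Player(20)
print(should_ignore_healthpack(me, ally))  # True: the ally needs it more
```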

  33. Of course, there is the issue that the machine might also, in the fullness of time, have free will, and calling people "just machines" would be doing machines a disservice.

  34. Assuming that the universe is deterministic, then even if choices are predetermined by cause and effect, that doesn't mean the choices don't exist. Event A causes person 1 to choose choice Z over choice Y. The choice is there, and so is the act of choosing, but the underlying force that drives everything – cause and effect – determines that person 1 will choose Z.
    I don't think it matters if the universe is deterministic or not.

  35. I think that 'what we ought to do' is still predetermined. If we took a person in the present and worked through every single event in their life, I think that you could 100% divine what their actions in a situation with a moral question would be. It's these events that make up what a person is. It's influence, upon influence, upon influence. There was influence on your grandfather that decided how to raise your father: who he would socialize with, where he would live, and what he would experience, at least until your father got older. However, those events that he experienced would stay with him and generally mold who he is as a person. Say that his father was abusive. Yes, he could follow in the footsteps that his father left him and become abusive as well, or he could reject that and become a better person, but I would argue that his decision would be based entirely on the people and events he has met with during his life, which were set by his father. And then your father has you, and submits you to his influences, and it repeats.

    At the end of the day, the decision to deny the health pack is also just a whole bunch of chemical reactions in our brain that follow the laws of nature and are therefore deterministic/predictable.
    Until science proves that there are truly random macroscopic (not quantum-world) events happening in our brains, hard determinism is the most plausible scenario.
    But even if that gets proven at some point, I would still argue that randomness isn't freedom.
    The freedom we think about when we talk about free will just doesn't make sense when you really think about it.
    So whether everything we are, think and do is deterministic or not is still up in the air, but free will is a concept we can definitely rule out.

    So either everything we ever think and feel was predetermined from the point the universe came into existence, and therefore predictable if you had knowledge of every single particle in the universe and its relations to all other particles…
    Or there are random events happening that affect things and therefore prohibit completely accurate predictions.

    But a random event in no way means that freedom (whatever freedom may mean on a scientific level) comes into the system.

  37. I would argue yes. The human body is a machine, the brain is a processor, and the mind is a complex computer program, the source code was written by evolution, and then debugged by society.

    The primary directive of that program (ignoring, for the moment, certain subroutines) is to accumulate a maximum positive-emotion/time integral. Those positive emotions could be pleasure, meaning, excitement, victory or others, but the priority of those varies from person to person, hence why some are hedonists, while others are crusaders.

    There's nothing in life a human being does that isn't motivated by some emotional reward, even if it's more opportunities to try for those rewards.

  38. I would argue that the "will" is simply a longer played out version of cause and effect. The only difference between our brain and a computer is simply the complexity of it.

  39. Sorry, but "OUGHT" is a subroutine of the program. Its code is made up of memories and patterns established when we are young. Yes, the programming is complex, but then it's been coded for hundreds of thousands of years. Compare computers of the '80s with today's computers and imagine that complexity compounded over our entire evolutionary journey to what we are today. Yes, humans and computers are very much alike.

  40. For some reason I'm reminded of the Chinese room (the thought experiment about the difference between a human inputting answers mechanically without understanding the nature of those answers and a computer doing the same)

  41. How we operate in our daily lives comes from electrical and chemical reactions in our brain. There's no room for this free will nonsense

  42. If a computer could be programmed to choose between two outcomes based on more than just its own well-being, then it is up to the programmer to determine what the preferred outcome is. Humans operate the same way, but there happens to be a much, much larger number of programmers at work, in both a literal sense (our family, friends, etc.) and a figurative sense (observations, environment, etc.). In the case of a sociopath, the individual has been programmed with a variety of values outside the norm of other individuals, but they are just as programmed as the rest of the population.

    One could argue that such a person, or even the rest of the population is not as deliberately programmed as a machine, but I don't see how it can be argued that our decisions are not the sum of societal and environmental programming.

  43. The anomaly of free will is affected by fear. Computers can't feel fear. Fear of death or loss is a big factor governing our actions. For example, if I am "forced" to do something or I will die, and I choose death because I do not fear it, then the force that is threatening me tries to find another fear to exploit. It could be the death of someone you love, like your child. Feelings and apathy towards others govern decision making just as much as fear.

  44. In a deterministic universe, nothing can transcend cause and effect. Everything is a pattern, and all patterns can ultimately be described with mathematics. The brain, existing within the universe, is also deterministic and, therefore, a pattern. Because all patterns can ultimately be described with mathematics, and computers can theoretically process any mathematical pattern, people can be simulated within computers. People may argue that the simulated person is not actually conscious, but that would mean that we would have to assume the existence of some property of the universe tied to physical matter (biological matter, in particular) providing consciousness, which, in my view, adds unnecessary complication to the issue. Regarding responsibility for actions, it is important that we do not draw a distinction between ourselves and patterns. We are not stuck following patterns, we are the patterns.

  45. Just the fact that we can ask the question "do humans operate like computers?" means we are more than just computers. It is classical conditioning, especially in our formative years, that programs us. The way we become human is to question and, if need be, change the way we react and think about things – like when we make the hard decision to change our view on the variety of things we were programmed, on purpose or by our social milieu, to hold as true. We gain a "soul" when we do this. I'm not talking religion here either.

  46. The holographic principle and CAT scans suggest we have no free will: decisions are made 3 or even 6 seconds before we are aware of them, which is really spooky. The holographic principle's math, based on Fourier series, says that the brain and indeed the whole body are a continuous, analog signal. Fourier series are continuous and analog, like everyday reality – no jumping pictures in my world – yet sums of continuous sine frequencies can also make squares and triangles, like the ON/OFF signals computers work with, every 1 and 0. So even the muscles and the whole body have memory; the brain is like a processor or switchboard mechanism. That's why robots are so bad – a land-rover computer has the capacity of a restarted cockroach (a Kaku quote). Of course, is it even possible to make a robot from organic matter, or is that just science fiction? In 1976 people said we needed much faster computers to create artificial intelligence, and now we have them – computers are 100 or 1000 times faster than human brains – and ROBOTS still SUCK BIG TIME.

  47. I am flawed, but yet I am more than a simple machine, for I, though flawed, am self-aware of such faults and am capable of learning from what I have done through observation. I, though flawed, can do what no machine can yet do, and the trait that allows me such capacity is that I have both awareness and morality. For a machine is no more aware of the implications of its code in its hardware, unlike I, who can understand that the self that is I is made up, in sum, through the combination of nurture and the nature of my genes. And I, if I see that my actions bring ill to one that causes me no pain, can adapt. Though this boundary may come to be blurred in the years to come, so who am I to draw conclusions.

  48. Another way to compare the operation of humans and computers is how we experience reality. Computers experience the universe in discrete frames, so time is not continuous to them. Humans experience time in a linear fashion because our experience is continuous and flows from past to present.

  49. Except what you do "with your will" is itself determined by all of the things that happened before and affected you (directly or not).
    Welcome to Hard Determinism, the fact that you're here now was calculable a billion years ago.

  50. Unless I'm mistaken, didn't humans come before computers? So shouldn't the question be "Do Computers Operate Like Humans?"

  51. I had this theory myself before addressing the consciousness unit of philosophy class! By looking at it from this perspective, I reasoned that it is possible for a computer to be conscious (in the future, not yet obviously), because human brains are really just incredibly complex organic computers that send electrical signals around.

  52. The sense of "ought" really bugs me. There are "priorities" in programming: the stuff that keeps the microprocessor working, or, say, the Ctrl+Alt+Del handler that runs above most visible operations.
    I have low health -> I want this healthpack -> I take it. That's basic programming.
    I have low health -> I want this healthpack (priority: high) -> this line of thought and action means I have no free will -> collapse of the system's identity (enter a close-to-ultimate-but-not-really, prioritised, self-protecting conclusion mechanism) -> I "choose" not to use the healthpack. A much more complicated level of programming, but still programming. A more advanced version of the same line might just as well implement the "ought" to sustain your ally.
    The computer-based theory has a lot to offer in explaining the reasons behind morality and actions. After all, programming IS based on dividing up and analyzing human thought.
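The priority scheme this comment describes, where a higher-priority "ought" rule overrides the basic inclination, can be sketched in a few lines. The function name, thresholds, and return values are illustrative assumptions, not anything from the video or the comment:

```python
# A minimal sketch of a priority-based "ought": the duty rule
# outranks the basic want-a-healthpack inclination.

def decide_healthpack(my_health, ally_health):
    """Decide who gets the healthpack under a simple priority rule."""
    want_it = my_health < 100      # basic inclination: low health -> want pack
    if not want_it:
        return "ally"
    # higher-priority "ought" rule: yield the pack when the ally is worse off
    if ally_health < my_health:
        return "ally"
    return "self"

print(decide_healthpack(60, 5))    # duty overrides inclination: "ally"
print(decide_healthpack(60, 90))   # no conflict with duty: "self"
```

On this view the Kantian "choice" is just a second rule evaluated before the first, which is exactly the commenter's point: more complicated, but still programming.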

  53. I would argue that the only reason a lot of people think they OUGHT to give up the healthpack is because they gain something from giving it up. Which could be programmed. If you give it up, you have a better chance of your friend surviving to help you, or you don't damage your friendship with someone because you are a greedy asshole. You choose to let them have the healthpack because it helps you achieve the goal of beating the game, or, ya know, having friends, which humans need and crave.

  54. Notice the fine print at 0:54 where he lightly makes the assumption that beings are subject to *efficient causes*. His whole theory is predicated on this shaky ground. Aristotle's other causes (formal, final) are much more applicable to living beings. We are not perfect billiard balls knocking into one another in a neat geometric space. He led us down the garden path for centuries with this red herring. He really was a bit of a Kant alright.

  55. As others have stated, we (AND computers) can simply be programmed for what to do in such situations. There is no way I can give this anything but a thumbs down.

  56. Interested in philosophy? Be more than a mere automata. Embrace your autonomy. Check out Hyperianism.

  57. They can still have an "ought" and be a computer prioritising what is better, like a car deciding whether hitting the boy will save more humans.

  58. You don't need to reduce the complexity of the human mind to algorithms to control it; instead you can manipulate it based on its goals and exploit the inefficiencies of the human condition. That's why herd mentality exists, and why cults can be formed despite seeming so idiotic in hindsight.
