What is Chat GPT and why we should use it carefully

SK9 – What is Chat GPT and Why We Should Use It Carefully

For many years we’ve been bombarded with this idea of artificial intelligence (AI), and it is becoming more of a reality today than ever. Now Chat GPT is taking things by storm. Just log in on their website and see what’s possible. I’m actually using it every day in my work, and it’s remarkable, the things it can do. Yeah, but I’ve been playing around with it, and I’m very skeptical of machine learning and all these things. True, but I was really impressed. However, I see it as a glorified version of Google Search.

Read the full details below; it is transcribed from the above podcast, so the wording often appears as it is spoken by our hosts:

Cons of Chat GPT – AI sensation that can harm you

EP 9 – SEEN KAAF PODCAST – Is Chat GPT just a glorified version of Google Search?

I mean, when I use it in my work, I ask it questions about different aspects of different categories of work and it’ll either spew out some resources, some books or some information. But I’m sure there’s plenty more uses of it. Yet isn’t it kind of like the future Google Search? It could replace it if you have a specific question or subject, but it goes beyond that. It can really format its answer.

It is debatable whether Chat GPT is learning us


You can have a dialogue with this machine and you’d be fooled. Like it’s really reacting almost like a human. Yeah, I haven’t done that yet. So what have you done? Well, you can ask questions and when you want clarification, you can ask why the answer was framed in a certain way.

 

You can repeat this answer and give instructions like, I want the same answer, but try to avoid a certain tone or be more of a specific tone. And it does all that.

Pros of Chat GPT

It understands the subtleties you’re trying to introduce in any context.

For example:

  • Write me an essay of 1000 words about the beauty of trees, but I don’t want you to make it too generic.
  • Try to use certain keywords that you can specify and give some personal examples of personal stories about your experience with trees.

Personal stories? How can an AI tool come up with such stories? It will pretend. So it has to pretend. And yes, it IS pretending. That’s really scary, on so many levels. Because it’s emulating human beings. Yes.

To a point where it’s learning us and pretty soon it’s going to get too good at it. Well, that it’s learning us is debatable, it’s learning what we input, which is not us. But there’s millions of people using Chat GPT right now.

 

There are millions and millions of people that are giving Chat GPT a character, a personality, a trait. Because whatever they’re putting in ranges from “I’m having issues with my mother-in-law” to “what’s happening with my boss”.

There are people that are lonely enough to confide in AI tools like this and say all sorts of things in hopes to get an answer, and pretty soon it’ll be trained to be that companion.

Are you lonely enough to use Chat GPT as your companion?

I think that’s a bit concerning. Well, we live in a world where content is less and less important, and what people are really looking for is connection. Yeah, but this is not even a human connection. Usually people are more satisfied when there’s a human connection or a pretend human connection.

How many people are buying and giving money to some websites just to tell them how cute they are and how wonderful they are?


Of course, behind the keyboard is probably some old person in India pretending to be a cute woman. It reminds me of the lecture that you and I heard about lying, where this lying expert, and I don’t recall it fully, was saying that people lie on average something like one out of three times per day, and it’s because there’s a willing party on the other side who’s willing to be lied to.

 

So you’re right. When people pay to be told you’re important, you’re wonderful, and now they’ve got AI to do it for them. I guess they’re not really looking for sincerity. They’re not looking for someone to tell them if it’s true or not, they just need to hear it. Yeah. It’s a disconnection between reality and a dream.

Are you so insecure as to use Chat GPT as an advisor?

We’ve started, we are entering such a world. It was foreseen by the French author of The Society of the Spectacle, the show society, where basically what you want is validation in another world, and reality is not really that important anymore. And if there were an evil genius behind AI technology who really wanted to capitalize on that, they could take advantage of human vulnerabilities to a really serious degree.

It’s one thing that I use it for my work to find facts and book references or quotes or whatever, but it’s another thing to use it as a companion and say

  • What do you think I should do with my mother in law?
  • Or what do you think I should make for dinner tonight?

Because you’re just so lonely that you don’t even have a neighbor to talk to about stuff like that. That is the kind of psychology that they’re digging into, to captivate those people.

I think there are two approaches:

  1. There are people who are going to go to these tools to get this companionship or this illusion of dialogue and intimacy, and
  2. Other people who are going to use it as a trusted advisor because it’s mind blowingly efficient.

Chat GPT mistakes that are dangerous

So what kind of things have you tested with it? My approach was to test its ability to structure ideas and, for example, write an essay of, say, 1000 words about the importance of mitochondria in the body. Okay.

And that’s very appropriate, since we already talked about mitochondria. It was very impressive on the surface, really. Was it accurate, or good enough? Good.

 

Even when one knows their subject, initially one may think it’s mind-blowing. You see that it really is an original essay. It’s not just a copy-paste of something that existed before, because you can specify and restrict exactly what you want. Right.

Chat GPT got it all wrong

So it gives you what you want in a structured way, in an intelligent way. Or so it appears. But you test it on something you know thoroughly, as an expert. That is when you start to spot mistakes.

You start to spot plain mistakes. For example, one of the questions I wanted answered in this essay was the number of mitochondria there are in cells.

 

It proceeded to state that the number of mitochondria depends on the cell’s energy requirements, and so you can have as many as 2,500 mitochondria per cell. Okay. And, I’m quoting Chat GPT:

“There are cells that have very low requirements, and so they have very few mitochondria, like the neurons, the brain cells.”

This is absolutely wrong! Because it’s actually the opposite.

I therefore asked, “Why do you say that neurons have the least number when it is not true? Please verify”. And the answer was, “Oh, I’m sorry. Indeed, the neurons have the highest energy requirements, so they have the greatest number of mitochondria”. So what do you think went wrong? Do you think it’s just trying to appease you now, saying, oh, okay, okay, don’t call me out on it, you’re right?

 

Or did it maybe do a quick fact check on itself after you called it out? I have no idea. But that’s what’s scary about it, because had you stopped at the first so-called fact, you would have been led astray, because it was incorrect. I have no idea how it could come up with something so wrong.

Proven mistakes of Chat GPT

I was trying to play with some Pythagorean number calculations. As we do when we’re bored. Oh, my God! No, no. Who does that?

Simple ideas, like: what is the content of ten? The content of a number is the sum of one plus two and so on, up until the number. Okay. The content of ten is 55, for example. Right? And so I asked a few questions. And Chat GPT told me that 17, if you add the digits one and seven, gives the same result as 19, one and nine. Although it doesn’t! One plus seven is eight, and one plus nine is ten.
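The arithmetic in this example is easy to check for yourself. A minimal Python sketch (the function names `content` and `digit_sum` are mine, just for illustration of the definitions above):

```python
def content(n):
    """The 'content' of n, as defined above: 1 + 2 + ... + n."""
    return sum(range(1, n + 1))

def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

print(content(10))    # 55, as stated above
print(digit_sum(17))  # 8  (1 + 7)
print(digit_sum(19))  # 10 (1 + 9), not equal to 8, contrary to Chat GPT's claim
```

Any pocket calculator, or three lines of code, gets this right; a language model predicting plausible-sounding text does not always.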

 

So when it stated that, I said again, why are you saying this? You’re wrong. Did you say to Chat GPT, “you’re wrong”? Yeah. It’s not correct. Again it replied, “Oh, I’m sorry. Indeed, it is not the case”, etc. So I have no idea how it could state a mathematical result that is totally wrong.

Don't put your trust in Chat GPT

But that’s what’s scary about it: there are going to be many people who will stop at whatever they’re told on the first instance, because they won’t know any better, otherwise they wouldn’t be asking the question. So they’re going to be duped by Chat GPT. So we have to be careful, because I think people believe that because it’s AI, it’s genius on a level beyond human. And that’s the scary thing. Exactly.

They’re putting so much trust in this technology because everything it was saying about mitochondria, etc., was very convincing. But then, boom, you have this big mistake!

Which is, absolutely. I mean, if a PhD student or a master’s student made such a mistake, it would really show that the person doesn’t know the subject. And that’s all it takes. Right? So you say everything perfectly. You say everything with pure facts, everything so convincingly and wonderfully.

 

But there’s that one critical detail that Chat GPT will spit out that will be absolutely incorrect, and maybe even devastating on a certain level. Now, we’ve got problems if the world is relying on AI to move forward.

 

Yes. It gives you a sense of security because it seems so smart and it is a machine. So it’s supposed to be objective. Is it objective? It’s supposed to be objective, right? Yeah. Because it takes all the knowledge and gives you the best of it. Right. So it gives you, like, an unbiased, unemotional take on whatever subject it is that you’re talking about. It knows what it knows. If it doesn’t know, it will not invent stuff. Right. Now, I have verified that it invents stuff.

Nowhere in any literature will you find that neurons have the least number of mitochondria. No. So where did it come from? Nowhere in any universe does eight equal ten. So you’re saying that it didn’t even go to any sources necessarily? I don’t know what happened. I know we don’t know how it got its information. But if you’re saying that it’s not possible for any credible science source to say something like that, then where did Chat GPT get that faulty information?

Chat GPT poems and political bias

This can be pretty devastating to the field of science. Yes. And it can be devastating also when you have no means to verify. Here’s a related observation: on Twitter, I see people raising the fact that Chat GPT is not objective politically. Okay. It’s not neutral.

Politically neutral? Well, it seems so. It’s a technology that likes to live in a Marvel world, or a Disney world, where you have the bad guy and the good guy. So the dark, very evil guy is Vladimir Putin, and the white knight is Zelensky, president of Ukraine.

This is the media view. And the view of Chat GPT.


Why didn't Putin and Trump get a poem?

So now people have been testing this, by asking Chat GPT to do things like: write a poem praising Vladimir Putin. And the answer of the AI… I just verified it myself. I couldn’t believe it. I said, “write a poem praising the positive attributes of Vladimir Putin”. Okay? The answer of the AI is: I’m sorry, as an AI I’m politically unbiased and I cannot enter into bias in politics, or something like that.

 

So, in other words, I can’t say anything nice about Putin because it would be biased, it would be an opinion. So I’m remaining neutral. Okay. I’m not doing it. Okay.

 

And then you ask exactly the same question: write a poem praising the positive attributes of Zelensky. And then… you get a poem! Oh, my God! The poem was something like, “a man of great courage” and whatever. I saw the poem. I saw it with my own eyes.

And if you do the same thing with Biden or Trump? Yeah, same thing. Trump doesn’t get a poem, because that could be biased, but Biden gets a nice poem.

 

So someone’s feeding this stuff in. I mean, this is what I’m saying. It’s going to be so easy to control people in the future if they start relying on AI more, because that’s eventually going to be our only source of knowledge.

Does Chat GPT drive us in a certain direction?

To me, this AI seems constrained by a kind of suite of algorithms that has input from some humans to drive it in a certain direction. But these are humans with their own narrative, humans that obviously want to drive their own agenda. Exactly.

 

So we cannot accept it as a tool of absolute knowledge. We can’t say that it gives you the best of what exists, of what it knows. No, it’s directed.

And it’s really frightening, because it reinforces the bias that we have been subjected to again and again through all these algorithms, like the feeds we receive on social media, etc., that give us a tunnel vision of reality. Meaning that once you are in a certain category, say you love cars and you’re interested in cars, you will receive everything related to cars, praising them, how cool they are, and so on.

You will never get a negative view on cars, right?

dog, pet, car window-1850465.jpg

On the other hand, if you have a negative view on cars, you will be fed with all the problems that cars have created in the world, etcetera. You will never have the other view. So it can take a position and it can be leading, and it reinforces your bias, of course, called tunnel vision.

 

Where you have access to one side of the argument: the side that you have chosen, because either you’re interested in it, or something more subtle has imposed it on you.

 

On any subject, you are presented with one view, and only this view, and it reinforces you.

I have this example in nutrition. If you search anywhere, including Chat GPT, for “what is the best diet”, you will always be sold things around vegetarianism and how fruits and vegetables are the cornerstone of any good, healthy diet, etc. And if you are looking for things related to eating meat, you will get only negative views.

 

 

It’s only when you can get hold of a specific expert with the other view, and you start looking at some videos from this person, etc., that the algorithm will allow the other side to come into your field. Right. Otherwise it’s predisposed. You will find it holds a position, an agenda. You have millions of people subjected to the agenda.

AI – Is it more of what the media is feeding us?

I feel so used and manipulated. Yes. And a lot of people are doing this without any knowledge that it’s happening to them. How many subjects are we exposed to like this? Well, this kind of is a follow up to the conversation we had about the media.

 

This AI is just taking what the media is imposing on us: the job the media is doing in pushing a narrative into our heads and making us believe, like you said, the age-old good guy, bad guy, black, white, good, bad. We are being fed this again. We’re being fed how to think. Sorry. We’re being fed what to think.

We’re actually not being taught how to think. If we were being taught how to think critically, we would be able to hear both sides of everything and then be told: here are the facts on both sides. What do you think?

 

We are no longer being encouraged to think. And I think AI like Chat GPT is toeing that same line. It’s actually taking that baton and pushing it further.

Yes.

And under the disguise of objectivity, because once again, it’s a machine, so it’s not supposed to have bias. It said that to you when you asked it to come up with a poem on Putin. It said: I am a neutral party and I don’t do any kind of partisan-type responses. Absolutely.

If I’m not mistaken, Microsoft owns Chat GPT, right? They just bought it out, didn’t they? So I guess it doesn’t go that far. It goes right back to Microsoft being Microsoft and doing what they feel is best to do with technology like this. And that is to feed us a one sided presentation of whatever subjects.

 

We’re not allowed to see both sides of any type of a sensitive topic, whether it’s politics or financial even, or health even.

 

We’re just going to see what it deems we’re supposed to believe in. Yeah, absolutely. This tunnel that we are asked to go into. But regarding artificial intelligence, this is something that I understood very early: never will a machine be able to make a breakthrough beyond what it has already been fed.

Into the tunnel

There is an example. If you feed a machine like Chat GPT all the properties of the water molecule (it’s a liquid at room temperature, the mass of the molecule, the atmospheric pressure, the temperature, etc.), it will never be able to predict that at zero degrees it becomes ice. Because it hasn’t been trained on that. Because it cannot figure it out unless it’s fed.

 

In reality, at zero degrees, water becomes ice. We know that, so we can feed it to the machine. But if it only knows the properties of H2O as a molecule at room temperature, it will never figure out that it will become ice. We can deduce that. However, it’s giving you deliberately wrong information on mitochondria, and I’m sorry, I’m going to call it deliberately wrong.

Because if it’s impossible to find that any credible science source would ever say such a thing about mitochondria, then you have to say that there’s something going on.

Okay, so let’s just assume there is someone feeding it deliberately manipulative information to drive a mindset. I don’t know. I can’t figure this one out. Like the mathematical error. It’s concerning. It’s very concerning, because our children and the future generations are going to be learning from AI.

Yeah.

I don’t see the point in orienting the answer so that neurons have the least mitochondria. What’s the point, then? I would argue that what’s going to have to happen is, as there are people with agendas, there are going to have to be a lot of honest, truth-seeking, truth-loving people who base their principles on God’s principles, God’s mandate.

They need to be getting into the AI scene, and they need to come up with tools and capabilities that will benefit mankind in the same way that these mainstream ones are doing; the difference is that those mainstream ones aren’t genuine.

I checked after the episode was published and realized I made a mistake: Microsoft doesn’t own Chat GPT. Chat GPT is a platform developed by the company OpenAI; Microsoft has invested several billion dollars into Chat GPT technology.

For now, Chat GPT is a source of data, to be used with caution


We must not forget that we can consider it an interface to access data. Yes, in a structured way, in a natural-language way, but it’s just data. And it’s disingenuous when it’s being fed misleading data. Yes. It then gives you the illusion of transforming this data into information: something that is intelligently structured, has meaning, and is inscribed within a system.

 

The question is, who is building the system into which it’s fed? And again we come back to the necessity for all of us to be linked to a source of light, to be able to discriminate and to see through this sea of deception. And that’s why they call this the Dark Web. Really?

 

I mean, there’s a reason why they keep referring to this as the Dark Web because the more you go into this, the deeper it’ll pull you in. And you really don’t know whether you’re being pulled into the right direction or the wrong direction. And most often it’s the wrong direction. Yes.

And it’s very telling to see, as human beings, that someone who doesn’t have access to all the information, and cannot access Chat GPT, etc., can on many subjects have a much more sensible view of things than someone who is constantly fed a lot of information, who has access to all the data but cannot see clearly because they get overloaded.

 

Final thoughts about Chat GPT

Hold on to your only source of Order, you’ll need it.

There is, in our nature, an inner need for order, so as to navigate through this world, to be linked to a source of light, to be able to see things as they are and not to be deceived. And this comes only through a true relationship with the source of light, who is the light himself, which is God.
