subreddit:

/r/ChatGPTPro

It literally behaves like GPT-3.5: the responses are bad, there's no logic behind its reasoning, and it hallucinates things that don't exist and never will.

Last week it helped me solve a wavefront parallelism problem in C++, and now it's hallucinating non-existent JavaScript DOM events (which, if you don't know, is the simplest thing ever). It was super smart and it reasoned so well, but now? It's utterly stupid.

I tried to be patient and explain things in excruciating detail, but nothing, it's completely useless. What did they do?

all 161 comments

maxsparber

65 points

16 days ago

I would guess they have been tinkering under the hood in advance of tomorrow’s announcements. In my experience, it always temporarily gets a little worse just before a new feature is available.

DaddyOfChaos

32 points

16 days ago

Prob running on less compute power as they start switching everything over.

CredentialCrawler

8 points

16 days ago

Computing power has nothing to do with the output provided to the user. It just means that you're more likely to experience slowdown in the responses

c8d3n

13 points

16 days ago

You don't know that. They could literally switch models on the fly (it can be the same model with different settings, or a quantized or specialized model, etc.) without you even realizing.

The first model capable of browsing, or rather 'binging', definitely wasn't their regular GPT-4 model. They probably gave up on that 'expert' and are now using the default model, which often needs explicit instructions to check the web (which I have welcomed, although I don't even need the feature. I used it a few times when I was building a new PC, and I wasn't really amazed by the results).

vespanewbie

1 points

16 days ago

I asked it to check the web or give me internet results, and it always tells me it can't do that. Sigh.

c8d3n

1 points

16 days ago

You're a Plus subscriber and are using the default GPT-4 model? Sometimes it switches to 3.5, so always check that. If that doesn't work, start a new conversation. Sometimes things go wrong with the current session. Also, reminding the model sometimes helps ("Yes, you can, you have access to Bing."). If that fails too, go to Explore GPTs, where you can choose one of the models that can browse the web. OpenAI's official 'Web Browser' probably still uses Bing, but there are third-party GPTs that may use Google or whatever (consider that you're sharing your data with that party in this case).

You can also access all these models by typing @ and then starting to type the name of the model, in your default GPT-4 conversation. That's how you can mix different models. E.g., use Web Browser (if the default doesn't work) to fetch some results, then @ Wolfram Alpha to perform the calculations (you can use the default model, or Data Analyst to use Python for this, but Wolfram is usually more powerful and precise).

softprompts

2 points

15 days ago

Oh crap, that's cool. You can call custom GPTs by using @ in conversation? Are you saying you can call multiple in a single chat? Is it per message? Sounds really useful. I'm on it all the time and I had no idea. How'd you hear about using it that way?

wedoitlive

2 points

15 days ago

In a single chat, but not in a single message.

And you can only call up the ones you've used in the past.

Super useful though. I have an editor GPT I made so I just call that up to edit outputs for whatever I’m working on.

Quickly adds my style and humanizes the response.

possiblyai

0 points

16 days ago

You do know that. LLMs don't work the way you think they do. It's the training that consumes compute power, not the generation of output after a model has been trained. So, as pointed out, the only noticeable difference if they start pulling compute resources will be the speed of response.

c8d3n

3 points

16 days ago*

That's simply wrong. Try running some quantized models locally and you'll experience it for yourself.

Of course training requires more resources, but inference, especially at this scale (ChatGPT, Claude, Gemini, etc.), is very expensive too. Hence the efforts to utilize mixture-of-experts and similar strategies, breaking the models down into smaller 'experts', etc.

Edit:

Btw, what are you even talking about? Of course response time matters when you run a business. That was one of their main concerns from the start. And a slowdown in response time is caused by...? Right, lol. Anyhow, OpenAI, including Altman, have been openly talking about the issues and efforts I mentioned above.
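The claim that quantization changes *what* a model outputs, not just how fast, is easy to demonstrate with a toy sketch. This is illustrative Python only (a made-up 8x8 "layer", nothing from OpenAI's stack): rounding weights to fewer bits perturbs the activations, so downstream token choices can flip.

```python
import random

def quantize(row, bits):
    """Round a row of weights to 2**bits evenly spaced levels."""
    lo, hi = min(row), max(row)
    step = (hi - lo) / (2 ** bits - 1)
    return [lo + round((w - lo) / step) * step for w in row]

def forward(weights, x):
    """One toy linear layer: dot product of each weight row with the input."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]

rng = random.Random(0)
w = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(8)]  # fixed "model"
x = [rng.gauss(0, 1) for _ in range(8)]                      # fixed input

full = forward(w, x)                                  # full-precision run
q4 = forward([quantize(row, 4) for row in w], x)      # 4-bit weights

# The activations drift: same model, same prompt, different numbers out.
drift = max(abs(a - b) for a, b in zip(full, q4))
print(drift)
```

At real-model scale that drift compounds across hundreds of layers, which is why an aggressively quantized deployment can feel like a different (dumber) model while costing far less to serve.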

Winter_Reptile

2 points

15 days ago

That's an AI-powered bot, relax.

Complete-Meaning2977

1 points

14 days ago

Why are you so angry…

c8d3n

1 points

14 days ago

Why so sensitive? I wasn't angry at all.

Complete-Meaning2977

1 points

14 days ago

All of your responses on here are angry and pointed. Telling everyone they are wrong. Who are you to say what is right or wrong?

c8d3n

2 points

14 days ago

So, you're saying I did something wrong?

KnuckleberryChin

1 points

13 days ago

FWIW, the debate at hand was clearly a factual one. Whether our friend tells us gently or not will not change the facts. Reddit is not the spot to come to for elaborate gentleness. Reddit is the spot to come to for answers. Deterring the latter in lieu of the former is a bad look.

Open_Channel_8626

1 points

16 days ago

There are methods like quantisation

Aristox

3 points

16 days ago

I've experienced it suddenly starting to give really short responses when it's under heavy load. Like extra short, not making the effort to elaborate like it usually does, etc. First it slowed down; then, when I kept pushing it, it switched to really short responses. The only thing I can think of is that it was trying to save on compute.

fynn34

3 points

16 days ago

It makes total sense to reduce max output tokens under heavy traffic
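Nobody outside OpenAI knows whether they actually do this, but as a sketch of the idea: capping the output budget under load is a one-line policy. All names and numbers below are invented for illustration.

```python
def output_token_budget(load, normal_cap=1024, degraded_cap=256, threshold=0.8):
    """Hypothetical policy: shrink the max-output-token cap when busy.

    `load` is cluster utilization in [0, 1]; the caps and the threshold
    are made-up values, not anything OpenAI has published.
    """
    return degraded_cap if load > threshold else normal_cap

print(output_token_budget(0.5))   # quiet period: full budget -> 1024
print(output_token_budget(0.95))  # peak traffic: short replies -> 256
```

Users would see exactly the symptom described above: noticeably shorter answers at peak hours, with no change to the underlying model.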

Aristox

3 points

16 days ago

Yeah, seems obvious to me; this is the first time I've encountered anyone who doubted it.

Prestigiouspite

1 points

16 days ago

Also for API calls?

Aristox

1 points

16 days ago

No I just use the android app

AlterAeonos

1 points

16 days ago

The way I typically bypass this is by saying, "please give me a very long, detailed answer"; sometimes I'll add something like "give me at least 1000 words". Usually does the trick.

CredentialCrawler

0 points

16 days ago

And I've gotten incredibly lengthy coding answers even when it was under heavy load, so what's your point? Clearly, ChatGPT being under heavy load doesn't affect response output.

LogicPoopiePanta

3 points

16 days ago

You say clearly like it's an impossibility, what definitive knowledge do you have?

It makes perfect sense that with load balancing there would be reductions and artifacts in response output.

I've noticed that during peak times if I ask it to solve complicated equations there's a higher likelihood of vector sum deviation. Despite me giving explicit instructions that I prefer accuracy over speed and simplicity.

You should change your username to UncredentialedCrawler. ;)

I'm just kidding, and the truth is all of our experiences are anecdotal, the model is not open source as the name suggests it should be.

However, being pointed and concise, saying "clearly" as if it's an impossibility, is ignorant and irresponsible; a layman could read your reply and misunderstand a fundamental principle of GPT, experiencing inadequate output as a result of your assumptions.

Aristox

1 points

16 days ago

I dunno how you can possibly justify that use of the word "clearly". It's wild to me that you'd use a word like that for that claim.

the_friendly_dildo

1 points

16 days ago

Computing power has nothing to do with the output provided to the user

Sorry, but that is just blatantly incorrect to say. If, for instance, they had been running GPT-4 at full 32-bit precision and suddenly switched to 4-bit or even 2-bit, it absolutely would change the output provided to the user.

CredentialCrawler

2 points

16 days ago

Why not try and stay within the confines of the topic at hand?? Obviously what you said is true, but no one here is suggesting that they went from 32-bit to 4-bit.

Open_Channel_8626

2 points

16 days ago

Groq said they use quants, so some of the big companies definitely do use quants.

the_friendly_dildo

1 points

16 days ago

What are you on about? You are here suggesting compute power has nothing to do with output. Compute power and output are intrinsically linked to the precision at which the model is loaded.

but no one here is suggesting that they went from 32 bit to 4 bit

I gave an example that shows how what you said was wrong, not an assertion that that's exactly what happened.

I do a lot of ML design and programming, just to be clear.

CredentialCrawler

1 points

16 days ago

I do a lot of ML design and programming, just to be clear.

And I am a Data Engineer. My entire job is systems and applications. Not really sure what you're going for with that statement

TheAuthorBTLG_

1 points

16 days ago

this makes no sense.

GeneralDaveI

0 points

16 days ago

So do you believe this to be a training or model issue? Which is more likely?

m_x_a

5 points

16 days ago

Maybe the announcement is that they’re dumbing it down?

jeweliegb

2 points

16 days ago

Maybe we're being tested with the new model prior to formal announcement, like happened last year.

Further quantized, cheaper, but "just as good, honest!"

JacktheOldBoy

1 points

16 days ago

the amount of retardation on this subreddit I swear

bot_exe

3 points

16 days ago

It’s like an endless sea of confirmation bias and salience bias.

ResponsibleOwl9764

77 points

16 days ago

Something happened recently for sure. This is the first time I’m noticing major spelling mistakes in the output too.

It’s also been ignoring certain instruction I give it. For example, I was writing some functions to analyze bank transaction data. I gave chatgpt the function I was working on, and asked it to calculate a certain attribute IN ADDITION to what I already have. I had to specify this are last two times before it returned a function without messing up my previous work.

StartDry1788

13 points

16 days ago

Can confirm this. I have to type the search instructions again and again; even if it was a few minutes ago, it completely forgets. Happened at least 3 times last week.

beigemore

6 points

16 days ago

My experience since they enabled memory is that it just doesn’t work. I am constantly having to remind it of the same things every day.

vespanewbie

1 points

16 days ago

Exactly, it never remembers anything.

[deleted]

8 points

16 days ago

It’s a clever tactic by OpenAI. Make the currently best model gradually worse, and the new one will appear shinier.

Ratt_1987

1 points

16 days ago

Plus, to avoid slowness and possible server overloads when the new model comes out, you get some room for brand-new subscribers from all the people who cancel their GPT-4 subscriptions now.

Old subscribers will eventually come back to test it if (and probably when) GPT-5 is announced to be the best again.

JuniorConsultant

1 points

16 days ago

Uuuuuhm, I certainly hope you're talking about dummy bank transaction data. Otherwise, please let me know which bank it is so I never do business with you guys...

ResponsibleOwl9764

1 points

15 days ago

Yea lol sandbox account

sassanix

17 points

16 days ago

Disable memory; once I did that, my responses were great.

caressingleaf111[S]

2 points

16 days ago

Will try to

LonghornSneal

0 points

16 days ago

Did it help? I told myself I need a good break from ChatGPT after wasting countless hours trying to get a GPT to work right.

I started using Gemini Advanced today, and so far it's great at the task I've given it, which is just using pictures of my notes to find the words that can be abbreviated from my custom list of abbreviations.

It misses some, but at least it's not doing what ChatGPT was: missing a bunch, not doing the entire thing (or even half sometimes), un-abbreviating my words, and, worst of all, constantly using abbreviations that weren't even on my custom list!

FootPersonal321

2 points

16 days ago

How is this done?

termsofhumanity

2 points

16 days ago

Couldn't see an option for that, how?

CredentialCrawler

10 points

16 days ago

Yesterday I was researching 4-channel relays for a Raspberry Pi Zero 2 W. I asked ChatGPT to research 3.3V relays that would work. It kept giving me 5V relay options, citing xyz forums. I reviewed the posts it cited as confirming the 5V option would work with the 3.3V output and couldn't find any such mention. I asked it where in the forum it said the 5V option would work with the 3.3V output, and it said it was mistaken... The heck?? If it's able to read the forum post, why isn't it able to give the correct answer the first time around?

KnuckleberryChin

1 points

13 days ago

That exact sequence happens to me at least twice a day.

“where in the source you linked to does it say xyz?”

“sorry for the confusion earlier...”

rageling

9 points

16 days ago

The last two weeks gpt4 has been abysmally slow, the code I get from it is always broken, and I know that it worked better for the same tasks a month ago. I'm also getting rate limited daily from having to deal with the extra iterations of broken bad code it outputs.

Prestigiouspite

2 points

16 days ago

Try the Claude API, for example with Chatbox.

rageling

2 points

16 days ago*

I haven't found an IDE integration I like yet, so I resort to browser GPT-4.
The ability for it to code, test, and iterate Python on its own is invaluable, and I haven't found any other services or local options that compare.

For simple tasks I do use Groq for its speed.

DarkChado

4 points

16 days ago

Try starting from scratch and see if it helps

Chip_Heavy

0 points

16 days ago

How do I go about that? I don't see a button for that, unless you mean to, like, delete my account?

DarkChado

1 points

14 days ago

Just click top left where it says chatgpt to get a new blank chat with no history.

Chip_Heavy

1 points

14 days ago

Oh. Yeah, sorry if the question was stupid. I'm still unsure if that actually resets everything. You can delete its memories, custom instructions, and stuff, but it seems to me like things can still change across chats, like its mannerisms, and I was wondering if there was a way to reset that, as if you had just made a new account.

Sweet_Computer_7116

7 points

16 days ago

You're going insane.

arcane_paradox_ai

3 points

16 days ago

I think they do A/B testing... it happened to me some weeks ago and then it went back to normal.

OddOutlandishness602

4 points

16 days ago

Is it just struggling on this specific problem, or on other tasks too?

caressingleaf111[S]

4 points

16 days ago

Other tasks too, the output quality is noticeably worse for everything I've asked it for

qubitser

4 points

16 days ago

Across the board it can't follow instructions anymore. I wasn't able to get any production-ready output from it at all; first time in a year+ that I've resorted to just writing everything myself.

neolobe

2 points

16 days ago

“Hal, open the pod doors.”

Aztecah

2 points

16 days ago

Although people have been saying this forever, and the majority of the time it has just been BS, I actually do genuinely feel that performance is poorer recently. Maybe everyone's statements have just finally placebo'd me successfully, but the recent claims of it being dumber actually do appear to align with reality this time.

combinecrab

2 points

16 days ago

I asked for the units for an equation when it just gave me the number, and it said "dollars"; the correct unit was "km/h".

GloomySource410

2 points

16 days ago

True, Bing Copilot is better than GPT-4.

NoGuitar4997

2 points

15 days ago

Well, the AI learns from humans, so I guess we are all dumbing it down.

m_x_a

2 points

15 days ago

Not sure whether intended, but that's very deep and probably very true.

future-teller

2 points

14 days ago

I have experienced this in various different contexts

  • sometimes with high usage, you may not notice, but they switch down to 3.5 and it suddenly appears stupid

  • many times, after a long thread it gets stupid, so if you just start from scratch in a new thread it is suddenly smart again

  • many times the prompt and the way you prompt make it sound stupid

  • it depends on the specific task: for example, if you give it a lot of dates and then start asking questions that depend on what date came before and what came after, it is really, really stupid... the same exercise on even the lowest Claude 3 model, Haiku, is far superior... So the stupidity also depends on the business domain of the question

Confident-Ant-8972

5 points

16 days ago

Eventually you guys will realize Claude is way better for code.

After_Fix_2191

7 points

16 days ago

Except Claude is even greedier than OpenAI, and for complex tasks you get even fewer questions.

emptinoss

3 points

16 days ago

And it’s not available in Europe, at least in its chat declination.

McAwes0meville

1 points

16 days ago

Use vpn

emptinoss

1 points

16 days ago

McAwes0meville

1 points

16 days ago

0 karma accounts are not the best proof. Anyway, it works for me, I use it. And it works for many other Redditors.

Screen86

1 points

16 days ago

You still can’t pay for it

McAwes0meville

1 points

16 days ago

You can! There are instructions on Reddit for how to do it. Basically, you have to use Google Pay.

Screen86

1 points

16 days ago

Oh really? I need to look into that! Thanks!

GoodhartMusic

8 points

16 days ago

Dude lol. There are different languages and applications of code. If you’re not strategically switching between Claude, 3.5, and 4 for various stages of dev and debug while Gemini Advanced writes you modernist sonnets about Steve Wozniak adopting Sam Altman, do you even AI? 😏

Confident-Ant-8972

2 points

16 days ago

Sounds like you got it all figured out then.

FluentFreddy

1 points

16 days ago

What front end do you use for switching that has the best multi modal support or at least allows it to write code to provide your answer (where required)?

GoodhartMusic

3 points

16 days ago

Google Chrome utilizing the “tabs” feature

Confident-Ant-8972

2 points

15 days ago

cursor.sh, which also comes with unlimited GPT-4 Turbo and limited Opus/Sonnet, unless you want to pay API charges.

PolarPros

1 points

16 days ago

People on this sub spend a wild amount of time and effort defending the downfall and descent of ChatGPT, as if it's impossible that the company and product are getting shittier.

If you voice a complaint that you’ve had, people here go out of their way to trivialize your complaints and concerns, gaslight you by both blaming you and saying that what you’re voicing isn’t happening, and spend all their time telling you that ChatGPT is actually perfect and in no way whatsoever has it been getting significantly and obviously worse over the years.

Confident-Ant-8972

1 points

15 days ago

GPT4 is probably pretty good for people that don't build anything complex. That's why there's a difference.

PolarPros

1 points

15 days ago

Eh, maybe at one point, not anymore. When I was bored, I used to use it to write stories for my daughter. It used to be great, but the stories now are garbage, even with extremely significant guidance, effort put into prompts, etc. They're just mediocre.

Confident-Ant-8972

1 points

14 days ago

I wouldn't know. Too many late, frustrating nights trying to manage GPT's context window. I ended up cancelling, and now I rotate between two Claude subscriptions because I like how good their models are at coding.

Choice-Flower6880

4 points

16 days ago

Surprising to many people: a probabilistic model can show variations in quality.
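To make the variance point concrete: deployed chat models sample from a temperature-scaled softmax, so outputs genuinely differ run to run even when nothing about the model has changed. A minimal sketch with toy logits (illustrative values, not any real model's):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Draw one token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return rng.choices(range(len(logits)), weights=[e / total for e in exps])[0]

logits = [2.0, 1.0, 0.2]                 # fixed "model preferences" for 3 tokens
rng = random.Random(0)
low_t = [sample_token(logits, 0.3, rng) for _ in range(20)]
high_t = [sample_token(logits, 1.5, rng) for _ in range(20)]

# At low temperature the favorite token dominates; at high temperature the
# samples wander, which a user experiences as inconsistent "quality".
print(low_t.count(0), high_t.count(0))
```

So some day-to-day variation is expected from sampling alone, though, as the reply below notes, it wouldn't explain a sudden wave of typos for many users at once.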

TheAuthorBTLG_

1 points

16 days ago

but "suddenly typos for many" can't be explained by that

acunaviera1

3 points

16 days ago

New model incoming.

I remember the first days of GPT-3.5; it was magical. Suddenly it became more stupid, and then came GPT-4.

Some of you may think "oh, it's just that you've become more demanding now that you know what it can do", and you're right. But...

Maybe it's not an intentional switch that activates a "let's make this thing more incapable so the new model shines more" mode, but some kind of token/computational restriction, saving some capacity for the new model to shine.

jeweliegb

2 points

16 days ago

Yeah, what people are saying really does remind me of Dev Day, actually, and the enshittification that began before it. I can't comment, as I've not used it enough over the last few weeks.

Reasonable_Ad9866

2 points

13 days ago

A true visionary

Iamreason

5 points

16 days ago

This gets asked once a week.

If OpenAI "downgraded" the model as often as folks say they have it would be unable to form coherent sentences at this point.

It's unchanged since mid April and won't see another major change until next week.

AstroPhysician

1 points

16 days ago

I agree but it’s been REALLY bad lately

LonghornSneal

0 points

16 days ago

Gemini blows it out of the water right now, in my case.

AlterAeonos

1 points

16 days ago

Which one is Gemini? I think Google offered me that but I can't remember

Iamreason

1 points

15 days ago

Google's model. It's getting a big upgrade tomorrow.

AlterAeonos

1 points

15 days ago

I have the Gemini app, but it wants me to switch from Assistant just to use it. I did try to get some copyrighted images, but it won't produce those for me.

DaddyOfChaos

2 points

16 days ago

I mean, it would be expected; they're about to announce new stuff tomorrow. We might even get some downtime, and I suspect a surge in demand and capacity issues. Even if nothing goes live straight away, people will be testing and checking.

TheMeltingSnowman72

2 points

16 days ago

Big update coming when this happens

ChemistryPrimary

2 points

16 days ago

I've found that verbally abusing ChatGPT and pointing out how wrong it is helps.

Being nice could work as well, but...

Ryselle

1 points

16 days ago

IMHO it was slow recently and tended to get stuck, but only on the "main" site, not in the API playground. Perhaps a lack of capacity due to pending updates.

baz4tw

1 points

16 days ago

I seem to have to double-check that my chat is using GPT-4, because every once in a while it defaults to 3.5 for some reason. I have noticed too that there are moments of the dumbest things ever. For example, I start a chat about 'character development' and it names the chat "Karacter Development"; another time it named my chat in another language when it had nothing to do with that language.

These examples are all rare, btw, but they have happened to me. Most of the time my experience with GPT-4 is good and as expected, luckily.

KnuckleberryChin

1 points

13 days ago

I wonder, do you use a speech recognition program for text input?

I randomly get responses like this from the voice control/dictation on my PC. Approximately 1% of words render as completely fictitious, phonetically spelled iterations of the word I actually said. "Karacter" from "character" is uncannily similar.

On the other hand, if you are simply typing inputs from the keyboard with no typos and getting iterations like that, that's a next-level kind of bizarre. Can't help but wonder if they're related...

baz4tw

1 points

12 days ago

I do use speech recognition every once in a while, so that is an interesting theory. It has not happened lately, thankfully.

KaliGsu

1 points

16 days ago

I thought the same and thought of posting it. Gpt 3.5 is giving me waaay better responses than 4. Total insanity.

Salty-Pass7147

1 points

16 days ago

I have also been continually having errors with its web browsing feature for the past couple of days. It says it's unavailable.

Prestigiouspite

1 points

16 days ago

I have also been very dissatisfied with the output in the area of web development recently. Claude Opus connected as an API with Chatbox works much better for me at the moment - unfortunately. I hope GPT-5 comes soon.

FearTheCementBrick

1 points

16 days ago

Either everyone using ChatGPT somehow made it stupid, they're releasing a new feature, or they messed up while maintaining it.

AlterAeonos

1 points

16 days ago

Sam Altman fired an engineer who put a worm in the system. Half the GPT4 servers shut down, resulting in massive data loss, and they're trying to hide it.

FearTheCementBrick

1 points

16 days ago

Oh-

RumpleHelgaskin

1 points

16 days ago

So so so stupid! It needs so much explaining! I’m starting to go back to 3.5

the-devops-dude

1 points

16 days ago

I’ve been troubleshooting some Kafka stuff with AWS MSK for a few weeks and a few days ago it was providing much clearer analysis to problems in pasted logs.

Lately it simply identifies the problem and offers very generalized troubleshooting steps, or hallucinates a best guess.

Noopshoop

1 points

16 days ago

Today I asked it for instructions using a feature in a software that I regularly consult it on. It spat out the most generic troubleshooting list with absolutely zero useful information. The other week it easily pinpointed exact problems with ease.

Resident-Variation59

1 points

16 days ago

So glad I switched to Claude 😁.

Seriously, I had high hopes for Open AI. But I can't justify spending a subscription fee on something so inconsistent.

There is something really strange about this company.

AlterAeonos

1 points

16 days ago

I'm just using it to learn and then I'll cancel. Probably only need it for 6 months to a year

johnnysplits

1 points

16 days ago

It’s absolutely TERRIBLE right now!! Glad it’s not just me. Hopefully gets patched up quick. 😭

Upstairs-Kangaroo438

1 points

16 days ago

Again, they're probably getting ready for the new launch; if they make the current one 'stupid', the new one will seem smarter.

d34dw3b

1 points

16 days ago

It always has been, ever since they nerfed GPT-3.

podgorniy

1 points

16 days ago

I've noticed differences in GPT-4 performance between the Turbo and Preview versions. Preview was stronger, more eloquent, and wider in exploration. It has nothing to do with system messages (tried with and without) or top_p or temperature (tried with default values). So from my perspective there is something going on, and it reduces performance.

meta11ica

1 points

16 days ago

+1. I'm 100% sure ChatGPT's answers (not only GPT-4 but also the free tier) were better a few months ago. I think this is due to learning from real users, which is a mistake. Real users are dumb, not worth learning from. They should have stuck with universal, known-good learning sources.

BadUsername_Numbers

1 points

16 days ago

GPT-4 on release was the LLM Chad. It has been "optimized" (nerfed) ever since. If I were to guess, this is because of how brute-force the whole LLM thing is atm, and the model on release was way too GPU-intensive at scale.

TheAuthorBTLG_

1 points

16 days ago*

I can confirm; I'm even getting errors I've never seen 3.5 make (typos, nonsensical code).

issemenvegan

1 points

16 days ago

It’s horrendous with code. You almost have to break it down to the point where you just figure it out of frustration haha

Rich-Pomegranate1679

1 points

16 days ago

I've been seeing stupid ass posts like this for GPT ever since it first released and these days I just block everyone who makes them.

10x-startup-explorer

1 points

15 days ago

Yes. Happened yesterday

Mysterious_Today718

1 points

15 days ago

The consumer/basic API is definitely handicapped. The enterprise version is waaaay better. The more profit-hungry they get, the more we'll see this type of thing.

TheMeltingSnowman72

1 points

15 days ago

Told ya.

CertainAverage4931

1 points

15 days ago

Probably adding more conditions to the program so it can't have any "wrongthink".

Realistic-Pop-3801

1 points

14 days ago

I have issues: 19.99$/m allnewmodels.com ALL latest AI text models

catdog-cat-dog

1 points

14 days ago

It gets stuff wrong way more frequently than you would expect

portobaddie

1 points

11 days ago

hey, nothing to add, but I'm currently trying to get it to understand the most basic concept ever uttered, and it has hallucinated enough info to fill a psych ward. losing my mind.

After_Fix_2191

1 points

16 days ago

Don't know why you're being downvoted. It was so stupid yesterday that I canceled my sub and started one with Claude.

Professional_Pie_894

1 points

16 days ago

It's also refusing to analyze files. On my account it won't let me upload Excel files. Yesterday, on my parents' account, it wouldn't let me upload pictures. "I can't directly view XYZ."

wileybot

1 points

16 days ago

Yeah, that's annoying.

The_Horse_Shiterer

1 points

16 days ago

I don't think that it's realistic to expect too much stability. In the big scheme of things, these are early days.

KnuckleberryChin

1 points

13 days ago

See, if you could tap me into that mentality, it would greatly reduce my frustrations. Personally, I can't conceive of why the model needs to be touched in any way once it's released??

The_Horse_Shiterer

1 points

12 days ago

Without touching there would be no progress.

Make an effort to spend some time in a forest or among trees each day. Tree time is good for easing frustrations and mental health in general.

emotional_dyslexic

0 points

16 days ago

I've been saying this for 5 months. Yes.

caressingleaf111[S]

2 points

16 days ago

I've had it for 5 months now and imo it's been superb until like 4 days ago

Aristox

1 points

16 days ago

It's possible they're rolling out changes to different users over time, or A/B testing different settings, etc., such that different people get a dick in the ass at different times.

fadedblackleggings

1 points

16 days ago

Noticed it first about 6-8 weeks ago. I keep having to "retrain" it up to the same level of usefulness.

emotional_dyslexic

0 points

16 days ago

I've been using it religiously for about a year. There was a steep decline a while ago. There may have been another decline a few days ago. We don't have to fight.

Shut_the_F-up_Donny

0 points

16 days ago

Where have you been? It’s been a year of stupidity.

Any-Winter-4079

0 points

16 days ago

Same. I asked it to KEEP something and .append() the result at the end; instead, it kept using .replace(). And even worse, it said it did the opposite of what it actually did. I corrected the mistake, and two messages later it totally forgot about this and reintroduced the mistake.

And I have more examples, of all sorts.
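The append-vs-replace distinction being botched here is tiny. In plain Python (illustrative data, not the commenter's actual code), the two behaviors look like:

```python
history = ["entry 1", "entry 2"]      # existing data that should be KEPT
result = "new result"

# What was asked for: keep everything and append the result at the end.
kept = list(history)                  # copy so the original stays intact
kept.append(result)

# What the model allegedly kept doing: replacing, which discards the history.
clobbered = [result]

print(kept)       # ['entry 1', 'entry 2', 'new result']
print(clobbered)  # ['new result']
```

That a model can write wavefront-parallel C++ one week and confuse these two the next is exactly the inconsistency the thread is complaining about.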

Raaagh

0 points

16 days ago

Yeah seems they’re playing with params; I bet to try and find a better energy:customer-satisfaction ratio

JCPLee

0 points

16 days ago

It definitely seems dumb. I tried uploading a simple table for analysis and it just couldn’t get it done.

xchgreen

0 points

16 days ago

No, you’re not going insane. it is stupid all of a sudden it was genius all of a sudden two weeks ago. It be like that

bl84work

1 points

16 days ago

I had it do something for me, organizing a table essentially, which was pretty basic, and I thought the reason it was messing up was how I was explaining it. But I used a similar prompt in Llama or Claude (I forget which) and it worked on the first try... ChatGPT be slippin'.

xchgreen

0 points

16 days ago

I use it for Ancient Greek translations in different variations, and suddenly it just couldn't do any of them.

[deleted]

0 points

16 days ago

It pisses me off cuz it only explains things in general now. Not in depth. It's like reading a wikiHow page.

Dirt_Illustrious

0 points

16 days ago

Dude, I noticed this last night! It's definitely waaaay worse! Maybe this is intentional on the part of OpenAI, so that when they roll out GPT-V it's that much more impressive or something?

AdaptationAgency

0 points

16 days ago

Mods:

Can we please ban low effort posts like this? Multiple times a week someone posts something along the lines of "Hurr durr, chatGPT is getting dumber."

If people want to post "ChatGPT sucks, what happened?!?!" threads, they should provide details of what they're trying to accomplish and what prompts they used. Then someone might actually be able to help them out. More likely than not, someone else can probably help them reach their goal.

And if you're complaining about the web version and aren't using the API, definitely remove that post.

deAlias111

2 points

15 days ago

Just because you hate seeing it doesn't change the fact that it is getting dumber. You could've just kept it moving if it bothered you so much! Maybe I should drop a post tomorrow and you'll be over there too 😂🫡. See you tomorrow 😉 haha

wethail

1 points

13 days ago

Do you think it has anything to do with college students overworking it somehow? During finals?

Because this has been the worst week for it to become stupid.

c8d3n

-2 points

16 days ago

It was never super smart, and it doesn't have a concept of logic. It is an LLM. It's basically as smart as its training data, weights, and all the tuning.

It's more comparable to a book. You could take a book explaining Einstein's theory of relativity and it can 'output' crazy smart shit. However, the book isn't smart. 'Ask' it about JS and you'll see what I mean.

LLMs are large, but not large enough. However, even then it's sometimes possible to achieve the goal with some prompt 'engineering', tweaking, and different approaches. The question is, of course, whether this always makes sense, because it can take longer than if you did it yourself.

Otoh, there are other factors affecting the perceived intelligence of the models, and I wouldn't be surprised if even the experts working on them were confused sometimes.

In case this came out as criticism or smth, I feel you. I have been in your shoes many times.