
I lost my ability to learn anything new because of AI and I need your opinions

I feel like I’ve lost my ability to learn because of AI. It is now so easy to generate code that it feels meaningless to focus and spend time crafting it myself. I am deeply sad that we may be losing the craftsmanship side of programming; it feels less important to understand the fundamentals when a model can produce something that works in seconds. AI seems to abstract away the fundamentals.

One could argue that it was always like this. Low-level languages like C abstracted away assembly and CPU architecture. High-level languages abstracted away low-level languages. Frameworks abstracted away some of the fundamentals. Every generation built new abstractions on top of old ones. But there is a big difference with AI. Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.
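The determinism contrast can be made concrete with a toy sketch (hypothetical code, just to illustrate the point, not anyone's real pipeline): an engineered abstraction maps the same input to the same output every time, while a sampling-based generator stands in for an LLM whose output can differ on every call.

```python
import hashlib
import random

def compile_like(source: str) -> str:
    # An engineered abstraction: the same input always yields the same
    # output, so you can reason about it and trace it.
    return hashlib.sha256(source.encode()).hexdigest()

def llm_like(prompt: str) -> str:
    # A toy stand-in for temperature sampling: the same prompt can yield
    # a different output on every call.
    words = ["fast", "safe", "simple", "clever"]
    return f"{prompt}: {random.choice(words)}"

src = "fn main() {}"
assert compile_like(src) == compile_like(src)  # reproducible every time
# llm_like("describe this code") may differ between calls.
```

A compiler you can audit once and then trust; a sampler you have to re-check on every run, which is the whole point being made above.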

I am not saying we cannot use them. I am saying we cannot fully trust them. Yet everyone (or maybe just the bubble I am in) pushes the use of AI. For example, I genuinely want to invest time in learning Rust, but at the same time, I am terrified that all the effort and time I spend learning it will become obsolete in the future. And the reason it might become obsolete may not be because the models are perfect and always produce high-quality code; it might simply be because, as an industry, we will accept “good enough” and stop pushing for high quality. As of now, models can already generate code with good-enough quality.

Is it only me, or does it feel like there are half-baked features everywhere now? Every product ships faster, but with rough edges. Recently, I saw Claude Code using 10 GiB of RAM. It is simply a TUI app.

Don’t get me wrong, I also use AI a lot. I like that we can try out different things so easily.

As a developer, I am confused and overwhelmed, and I want to hear what other developers think.

For my personal projects, I use AI for discussion only. I treat it as a better Google and sometimes a fake psychiatrist. I don't fully trust it, so I verify afterwards. If someone else wants to vibe code, please do, as long as you enjoy the process. Personally, I enjoy the coding process, so I wouldn't want to copy/paste code, let alone let it write directly in my editor.

For my work, I use it extensively. I use Cursor as a senior engineer that breaks down problems; I write only the parts that interest me, trust the AI with the rest, and do a review afterwards. AI prompting is a real skill. I don't like it, but I don't like my work either.

markus_zhang, an hour ago

I hear you here. I really do.

I don't know where you are in your career; me, I'm on the backend. But all the time I was working, the constant churn of new tools, languages, frameworks, and so on, the race to keep up with the vendors, just wore me out. And despite all that, building software honestly never changed much.

I have been working with both Codex and Claude, and you are right, you can't trust them. The best practice I have found is to constantly play one off against the other. Doing that, I seem to get decent, albeit often frustrating, results.

Yes, the actual writing of code is either over, or soon will be. That's the part I always considered the "art." I often found code to be beautiful, and I enjoyed reading and writing elegant code all the time I was working with it.

But the point of code is to produce a result, and it's the result that people pay for. As you noted with the evolution of development in your original post, the process and tools may have changed, but the craftsmanship of the people using them did not.

You make a fair point that this abstraction is different: prior layers were engineered and traceable, and an LLM output isn't. But I'd argue that makes the human in the loop more important, not less. When the abstraction was deterministic, you could eventually lean on it fully. When it isn't, you can never fully step away. That actually protects the craft.

Until AI becomes a "first mover," god forbid, where there is no human in the chain from inception to product, there will always be a person like you who knows where the traps are, knows what to look out for, and knows how to break a problem down to solve it. After all, as I have always said, that is all programming really is; the rest is just syntax.

emerkel, 2 hours ago

> Is it only me, or does it feel like there are half-baked features everywhere now?

This is the argument for actually learning: so you don't ship half-baked code, because the AI isn't good enough. The people who tell you it is good enough likely have a financial interest in pushing the narrative that AI will do it all.

> LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.

This is another problem. A lot of code is written so that exactly the same thing happens every time. If AI goes in and changes the logic in subtle ways that aren't noticed right away when updates are made, because no one understands the code anymore, that's a problem. In hobby or toy apps, no one cares, but in production code and critical systems, it matters.

al_borland, 2 hours ago

I've lost my ability to do the basic and advanced arithmetic or algebraic calculations I did in school.

That didn’t stop me from getting a phd.

If you think what an LLM spits out is all there is to programming, then the problem is somewhere in you, not in LLMs.

aristofun, an hour ago

Claude is a VM. Programming languages are dead: https://jperla.com/blog/claude-is-a-jit

ljlolel, 3 hours ago

As the article states, it is a "wild experiment". I wouldn't let AI control anything serious end to end. Also, if Claude really becomes a JIT, it is going to be an expensive one.

The idea is interesting though.

dokdev, 3 hours ago

It's like a gym with automated weights, where you just go to watch them being moved: our mental muscles aren't trained anymore.

krickelkrackel, 3 hours ago

Block the DNS on your router for two weeks and you will feel alive again.

lhmiles, 3 hours ago

LoL :) That makes sense, but what if a new AI model releases while I am offline?

dokdev, 3 hours ago

You will not suffer much from not adopting a new AI model for two weeks after release.