With AI
Collections
- 90% of My Skills Are Now Worth $0 ...but the other 10% are worth 1000x
- AI is an Ideology, Not a Technology argues that artificial intelligence is better understood as an ideology rather than just a technology. It promotes a view of autonomous machine intelligence that could replace humans, but this is a mirage since AI relies heavily on human data and contributions. An alternative view is to focus on how people are central to developing AI systems through providing examples, problem-solving and understanding the technology.
- What can LLMs never do?
inability to perform certain tasks that seem simple for humans, like playing Wordle or predicting the output of cellular automata.
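A minimal sketch (mine, not the article's) of the cellular-automaton case: one step of elementary Rule 110 is a few lines of fully deterministic code, which is exactly the kind of state-tracking the essay reports LLMs failing to simulate.
```typescript
// One step of a 1-D elementary cellular automaton (Rule 110 by default).
// Each cell's next state is the bit of `rule` indexed by its
// (left, self, right) neighborhood, with wrap-around at the edges.
function step(cells: number[], rule = 110): number[] {
  return cells.map((cell, i) => {
    const left = cells[(i - 1 + cells.length) % cells.length];
    const right = cells[(i + 1) % cells.length];
    const pattern = (left << 2) | (cell << 1) | right; // value 0..7
    return (rule >> pattern) & 1;
  });
}

// Evolve a small tape for a few generations and print each row.
let tape = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0];
for (let gen = 0; gen < 6; gen++) {
  console.log(tape.join(""));
  tape = step(tape);
}
```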
- A Front-End Engineer's Take on LLMs
the indeterminism and unreliable behavior of LLMs, which is unlike the deterministic nature of typical software development. The author thinks prompt engineering will go the way testing is going: it will become a skill rather than a discipline.
- Group chats rule the world
— Salons and groups have always existed, but why the recent shift to private discourse?
Because the public internet makes it too easy for people to sell you stuff, extract value from you, or harass you. Private chats are human-scale environments where understandable social norms, not commerce, algorithms, or formal rules, drive the interactions. It's an authentic experience. #
- The Expanding Dark Forest and Generative AI
As AI takes over the public internet (the trees), people will retreat to safe underground spaces where they know only authentic humans live. #
- How I Use "AI"
- The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers
- AI is an impediment to learning web development
- AI won't replace human devs anytime soon (twitter.com/skeptrune)
- I’m Tired of Fixing Customers’ AI Generated Code
- Tog's paradox (also known as The Complexity Paradox or Tog’s Complexity Paradox) is an observation that products aiming to simplify a task for users tend to inspire new, more complex tasks.
Travel became simpler → more vacations now involve flying a plane and thus obtaining tickets online and thus comparison-shopping, aggregating reviews of faraway places, etc → omg, vacation travel is complex again. It just allows you to fulfill more of the dream. — nine_k
Tog's paradox is the main reason why I suspect that generative AI will never destroy art; it will enhance it. It allows you to create artworks within minutes that until recently required hours to create and years to master. This will cause new art to emerge that pushes these new tools to the limit, again with years of study and mastery, and it will look like nothing we've been able to produce so far. — posix86
- The disposable web
- The Long Tail of AI - how non-AI companies are integrating artificial intelligence.
- The 70% problem: Hard truths about AI-assisted coding
- How AI-assisted coding will change software engineering: hard truths
- MODERN-DAY ORACLES or BULLSHIT MACHINES? - How to thrive in a ChatGPT world
- Deep dive into LLMs like ChatGPT by Andrej Karpathy (TL;DR)
- The Deep Research problem - OpenAI’s Deep Research is built for me, and I can’t use it.
LLMs are good at the things that computers are bad at, and bad at the things that computers are good at.
- The Law of Leaky Abstractions
So the abstractions save us time working, but they don’t save us time learning.
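One of the essay's own examples is iterating over a large two-dimensional array: the "it's just a grid" abstraction hides the flat memory layout underneath, and that layout leaks back out as a performance difference. A rough TypeScript sketch (sizes and the timing harness are illustrative, not from the essay):
```typescript
const N = 2048;
const grid = new Float64Array(N * N); // one flat block of memory, row-major by convention

function sumRowMajor(): number {
  let s = 0;
  for (let row = 0; row < N; row++)
    for (let col = 0; col < N; col++) s += grid[row * N + col]; // walks memory sequentially
  return s;
}

function sumColMajor(): number {
  let s = 0;
  for (let col = 0; col < N; col++)
    for (let row = 0; row < N; row++) s += grid[row * N + col]; // jumps N elements per step
  return s;
}

// Both loops are "the same" at the abstract level, yet the cache-unfriendly
// column-major traversal is typically noticeably slower.
for (const [name, fn] of [["row-major", sumRowMajor], ["col-major", sumColMajor]] as const) {
  const t0 = performance.now();
  fn();
  console.log(`${name} traversal: ${(performance.now() - t0).toFixed(1)} ms`);
}
```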
- ACM A.M. Turing Award Honors Two Researchers Who Led the Development of Cornerstone AI Technology - Andrew Barto and Richard Sutton Recognized as Pioneers of Reinforcement Learning
- The Bitter Lesson by Richard Sutton - http://www.incompleteideas.net/IncIdeas/BitterLesson.html
- AI researchers have often tried to build knowledge into their agents,
- this always helps in the short term, and is personally satisfying to the researcher, but
- in the long run it plateaus and even inhibits further progress, and
- breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.
- My LLM codegen workflow atm
- AI Blindspots
- Dijkstra On the foolishness of "natural language programming"
- The expectation for programming to be simplified through "natural language programming" is misguided; formal symbolism is essential for accuracy.
- Formal texts provide a structured framework that helps eliminate nonsensical statements, which are more prevalent in natural language.
- The Problem with “Vibe Coding”
- Vibe Coding is not an excuse for low-quality work - a warning that AI-driven vibe coding is transformative, but speed without quality is dangerous.
- Quality does not come automatically.
- Speed is meaningless without quality.
- AI is an intern, not a replacement (a human must stay in the loop).
- Why LLM-Powered Programming is More Mech Suit Than Artificial Human - “Centaur chess” pairs humans with AI chess engines, creating teams that outperform both solo humans and solo AI systems.
- AI Horseless Carriages
Their app is a little bit of AI jammed into an interface designed for mundane human labor rather than an interface designed for automating mundane labor. “Horseless carriage” refers to the early motor car designs that borrowed heavily from the horse-drawn carriages that preceded them.
- My new hobby: watching AI slowly drive Microsoft employees insane
- The Copilot Delusion
Management has an AI-shaped hammer and they're hitting everything to see if it's a nail. — #
This, all of this, seems exactly antithetical to computing/development/design/"engineering"/architecture/whatever-the-hell people call this profession as I understood it.
Typically, I labored under the delusion that competent technical decision makers would integrate tooling or choose to use a language, "service", platform, whatever, if they saw benefits and if they could make a "case" for why something was the correct approach, i.e. how it met some product's needs, addressed some shortcomings, made things more efficient. — #
My point of comparison of choice is overseas contractors, not pair programming.
Copilot or Cursor or whatnot is basically a better experience because you do not have to get on Zoom calls (after Slack has failed) to ask why some chunk of your system that cares about root nodes has mysteriously gained a function called isChild (not hasChildren) that returns a boolean based on whether the node has children rather than whether it has a parent. Or to figure out why a bunch of API parameters that used to accept arrays now don't. — #
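A hypothetical TypeScript reconstruction of that naming confusion (the identifiers come from the comment; the bodies are my guess at what made it so misleading):
```typescript
interface TreeNode {
  parent?: TreeNode;
  children: TreeNode[];
}

// What actually shipped: the name asks one question, the body answers another.
function isChild(node: TreeNode): boolean {
  return node.children.length > 0; // this is really "hasChildren"
}

// What callers reasonably assumed the name meant.
function isChildAsNamed(node: TreeNode): boolean {
  return node.parent !== undefined;
}
```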
- The Recurring Cycle of 'Developer Replacement' Hype
The pattern is becoming clear once again: the technology doesn't replace the skill, it elevates it to a higher level of abstraction. For agency work building disposable marketing sites, this doesn't matter. For systems that need to evolve over years, it's catastrophic.
- The biggest asset of a developer is saying "no" to people. — #
- The rise of judgement over technical skill
AI is democratizing a wide range of creative and professional tasks.
The key differentiator is no longer technical skill but judgement.
- A look at CloudFlare’s AI-coded OAuth library
It’s not bad, but I wouldn’t really recommend it for use yet.
- Why Generative AI Coding Tools and Agents Do Not Work For Me
- reviewing AI-generated code takes as much time as writing code from scratch, if not more.
- interns learn and improve over time, while AI does not retain knowledge from past tasks.
- Claims that AI increases speed or productivity often come from lowering quality standards while accepting additional risks, or stem from the interests of AI vendors.
ETC
nerdponx
This has been my complaint about AI from the beginning, and it hasn't gotten better. In the time I spend figuring out how to explain to the AI what I need it to do, I can just sit down and figure it out. AI, for me, has never been a useful assistant for writing code. Where AI has really improved my productivity is in acting like a colleague I can talk to. "I'm going to start working on X: how would you approach it? what should I know that isn't obvious from the beginning?" or "I am thinking about using Y approach for X problem but I don't see a lot of literature about it. Is there a reason it's not more common?".
These "chats" only take 10-30 minutes and have already led me to learn a bunch of new things, and help keep me moving on projects where in the past I'd have spent 2-3x as long working through ideas, searching for literature, and figuring things out.
- Show HN: I AI-coded a tower defense game and documented the whole process
Posts about "vibe coding success stories" often give the impression that with a swarm of agents, elaborate code orchestration, and LLM-generated rules, a single prompt like "build a time-rewinding tower defense game with no defects or bugs" is enough to conjure a game. But the prompts actually used in this project match what works best for AI coding: breaking a clear, carefully thought-out idea into hundreds of small problems, and giving concrete architectural guidance for the parts that really matter.
Wearing both the tech lead and product owner hats, this is also the textbook way to work with humans. 70% of my job is turning an executive's abstract demand of "time-travel tower game, no bugs" into a series of prompts with a strong architectural vision baked into their context, so the team can keep working at a high level of abstraction without stepping on each other.
My approach that works well for AI coding is to have the AI "one-shot" the basic functionality or gameplay skeleton, then build on top of it iteratively. If the one-shot result isn't immediately impressive, I follow up right away with different prompts and retry until I get something decent to use as a foundation.
AI plays three main roles for me. (1) Learning tool: even when I don't know the terminology, it grasps what I'm asking, gives me a starting point, and tells me things I didn't know I didn't know, which makes this the most important role. (2) Repetitive or tedious work: code comments, config files, and other things I could do myself but that slow me down, it handles well enough. (3) Search: as in (1), it figures out what I actually want, so I hand it filtering and recommendations. You can also hand it the "thinking", but there's no need to; it isn't smarter than a human, it's more like an FPU that is just faster and knows more.
The skepticism comes from the gap between how current AI solutions are being sold and what they actually do. Every AI solution, agents in particular, produces useless results without guidance from a skilled person; there is almost nothing genuinely "autonomous" about them. Even the person who coined the term "vibe coding" says the industry has the order backwards. These tools are fantastic, but leaving out the fact that they must be kept under tight control is effectively a lie.
Programming languages are an example of 10x leverage. Languages like Lisp used to promise getting more done faster; now you can write less code yourself while the output is generated in a fast, high-performance language. The "trap" is that you have to thoroughly review the parts of the generated code that aren't easily verified. Just as highly expressive languages let people without a plan churn out messy codebases, the same will happen again with AI tools. But where I really save time is not in writing brand-new code, it's in integrating or improving old code alongside new code. The big leap is in debugging: instead of just sprinkling print statements like before, I can paste the code in and ask the AI "the output is this rather than that, why?" and quickly get causes and alternatives. This is an enormous advantage especially for work where attaching a debugger is hard, such as SQL, IaC, and build scripts.