Software Engineering
Collections
-
Bulletproof React - A simple, scalable, and powerful architecture for building production ready React applications.
-
"Make It Work, Make It Right, Make It Fast" is an assertion that if you can "make it right", you'll be able to "make it fast" later.
-
Software engineering exists as a discipline because you cannot EVER under any circumstances TRUST CODE.
- You get the LLM to draft some code for you that’s 80% complete/correct.
- You tweak the last 20% by hand.
Vanclief is hesitant about the "5 times as productive" claim, where all you supposedly need to do is "check the code is good", for two main reasons:
- It is my belief that if you are proficient enough in the task at hand, it is actually a distraction to be checking "someone else's code" rather than just writing it yourself. When I write the code, I know it by heart and I know what it does (or is supposed to do). At least for me, writing prompts and then reviewing the code they generate is slower and takes me out of the flow. It is also more exhausting than just writing the thing myself.
- I am only able to check the correctness of the code if I am proficient enough as a programmer (and possibly in the language as well). To become proficient I need to write a lot of code, but the more I use LLMs, the fewer repetitions I get in. So in a way it feels like LLMs are going to make you a "worse" programmer by doing the work for you.
-
How Instagram scaled to 14 million users with only 3 engineers
- Keep things very simple.
- Don’t re-invent the wheel.
- Use proven, solid technologies when possible.
-
Locality of Behavior in React Components
Abstractions are not an enemy of locality. You don’t need to inline everything in a single file. In the context of a React component, the invocation of the function is more important than what it actually does.
-
- When a Bloom filter returns true, it doesn't mean "yes", it means "maybe": false positives are possible, but false negatives are not.
- If you're happy to accept being wrong 0.0001% of the time (1 in a million), a Bloom filter can store the same set with an 82% reduction in size.
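The "maybe" answer can be sketched with a minimal Bloom filter. The class name, hash scheme, and sizes below are illustrative assumptions, not from the linked article:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set k bit positions per item in an m-bit array.

    A membership query can return a false positive but never a false negative.
    """

    def __init__(self, m_bits: int, k_hashes: int):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item: str):
        # Derive k indices from two 64-bit halves of one SHA-256 digest
        # (the double-hashing trick: h1 + i*h2 mod m).
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # False => definitely absent; True => "maybe" present.
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))
```

For real workloads, m and k are derived from the target false-positive rate p: m ≈ -n·ln p / (ln 2)² and k ≈ (m/n)·ln 2, which is where space savings like the 82% figure come from.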
-
Three Laws of Software Complexity
- A well-designed system will degrade into a badly designed system over time.
- Complexity is a Moat (filled by Leaky Abstractions).
- There is no fundamental upper limit on Software Complexity.
You Can't Buy Integration
Don't Call Yourself A Programmer, And Other Career Advice
- 90% of programming jobs are in creating Line of Business software.
- Engineers are hired to create business value, not to program things.
- Add revenue. Reduce costs. Those are your only goals.
Eight tips to Write Functions like a Senior Developer
- Do one thing and do it well
- Never use flag arguments
- Prefer exceptions over error codes
- Separate commands from queries
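The last three tips can be illustrated together with a small sketch; the `Account` class and its methods are hypothetical examples, not from the article:

```python
class Account:
    """Hypothetical example of command-query separation (CQS).

    Queries observe state and have no side effects; commands mutate
    state, return nothing, and raise exceptions instead of error codes.
    """

    def __init__(self, balance: int = 0):
        self._balance = balance

    # Query: safe to call any number of times, never changes state.
    def balance(self) -> int:
        return self._balance

    # Command: changes state, returns None, and fails loudly with an
    # exception rather than a status code the caller might ignore.
    def withdraw(self, amount: int) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
```

Note also the absence of flag arguments: instead of something like `withdraw(amount, dry_run=True)`, a caller that only wants to check feasibility uses the `balance()` query and decides for itself.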
Architecture
- Domain-centric Architectures (Clean and Hexagonal) for Dummies
- BBC Online Uses Serverless to Scale Extremely Fast
- Islands Architecture
- HN: Modules, not microservices
- I just want to point out that for the second problem (scalability of CPU/memory/io), microservices almost always make things worse.
- I was working at Amazon when they started transitioning from monolith to microservices, and the big win there was locality of data and caching.
- Microservices are less efficient, but are still more scalable.
- I am working on a project that uses a microservice architecture to make the individual components scalable and separate the concerns. However one of the unexpected consequences is that we are now doing a lot of network calls between these microservices, and this has actually become the main speed bottleneck for our program, especially since some of these services are not even in the same data center. We are now attempting to solve this with caches and doing batch requests, but all of this created additional overhead that could have all been avoided by not using microservices.
This experience has strongly impacted my view of microservices and for all personal projects I will develop in the future I will stick with a monolith until much later instead of starting with microservices.
- Kernighan and Pike were right: Do one thing, and do it well
- It's not microservice or monolith; it's cognitive load you need to understand first
- “Instead of choosing between a monolithic architecture or a microservices architecture, design the software to fit the maximum team cognitive load”
- If you have only one team, consider adjusting your architecture to match the team’s capacity. Favour monolithic, cohesive and modular architectures.
- If you have multiple teams, consider doing microservices or similar type of architectures so they can work independently.
- The types of communication boundaries change significantly between single and multiple team architectures. Single teams are optimized to communicate via the codebase, documentation, discussions and design meetings. Multiple teams are better optimized to communicate via well-designed APIs (or libraries) that abstract the complexities of their domains.
Thread-per-core
The thread-per-core architecture for Rust async programs has been controversial. While it promises better performance and ease of implementation, it may only achieve one, not both. A share-nothing approach keeps data in separate core caches but is complex to implement transactionally. Research showed this approach reduced tail latencies over a shared approach. However, the experiments did not test dynamic work imbalances that could appear in practice. Work-stealing may help address imbalances while still keeping some data pinned to cores, achieving both performance and utilization benefits. The debate focuses on balancing work-stealing with shared state rather than ease of implementation claims.
Algorithms
Hexagonal Grids
This guide discusses different approaches to representing hexagonal grids in code, including cube, axial, offset, and doubled coordinates.
- Each system has tradeoffs in terms of simplicity for algorithms and storage.
- Axial coordinates are recommended for algorithms as they allow basic math operations.
- Offset coordinates may be better for storage.
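A minimal sketch of why axial coordinates suit algorithms: addition and distance reduce to basic arithmetic. Function names here are my own, not from the guide:

```python
# Axial hex coordinates (q, r); the third cube coordinate is implicit
# as s = -q - r, so cube identities still apply.

AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_add(a: tuple, b: tuple) -> tuple:
    # Basic math works componentwise on axial coordinates.
    return (a[0] + b[0], a[1] + b[1])

def hex_neighbors(h: tuple) -> list:
    # The six adjacent hexes are one unit step in each direction.
    return [hex_add(h, d) for d in AXIAL_DIRECTIONS]

def hex_distance(a: tuple, b: tuple) -> int:
    # Cube distance: half the L1 norm of (dq, dr, ds), where ds = -dq - dr.
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2
```

For dense storage, these axial coordinates can be shifted into offset coordinates to index a rectangular 2D array, which is the storage tradeoff the guide mentions.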
Approximate timing for various operations on a typical PC
execute typical instruction | 1 nanosec (1/1,000,000,000 sec)
fetch from L1 cache memory | 0.5 nanosec |
branch misprediction | 5 nanosec |
fetch from L2 cache memory | 7 nanosec |
Mutex lock/unlock | 25 nanosec |
fetch from main memory | 100 nanosec |
send 2K bytes over 1Gbps network | 20,000 nanosec |
read 1MB sequentially from memory | 250,000 nanosec |
fetch from new disk location (seek) | 8,000,000 nanosec |
read 1MB sequentially from disk | 20,000,000 nanosec |
send packet US to Europe and back | 150,000,000 nanosec (150 milliseconds)
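The table supports quick back-of-envelope arithmetic; for instance, dividing the transfer sizes by their latencies gives the implied bandwidths (the variable names below are my own):

```python
# Back-of-envelope bandwidths implied by the latency table above.
NS = 1e-9  # seconds per nanosecond

mem_seq_bps = 1_000_000 / (250_000 * NS)      # 1 MB / 250,000 ns
disk_seq_bps = 1_000_000 / (20_000_000 * NS)  # 1 MB / 20,000,000 ns
net_bps = 2_000 / (20_000 * NS)               # 2 KB / 20,000 ns

print(f"memory: ~{mem_seq_bps / 1e9:.0f} GB/s")   # ~4 GB/s
print(f"disk:   ~{disk_seq_bps / 1e6:.0f} MB/s")  # ~50 MB/s
print(f"net:    ~{net_bps / 1e6:.0f} MB/s")       # ~100 MB/s
```

This makes the relative costs vivid: sequential memory reads are roughly 80x faster than sequential disk reads, and a single disk seek costs as much as reading tens of megabytes from memory.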
Crypto
Children
- 1x Programming
- Base64 Encoding, Explained
- Clean Architecture on Frontend
- Fast-Paced Multiplayer (Part II): Client-Side Prediction and Server Reconciliation
- Fast-Paced Multiplayer (Part III): Entity Interpolation
- Fast-Paced Multiplayer (Part IV): Lag Compensation
- Feature Sliced Design
- How Pinterest scaled to 11 million users with only 6 engineers
- In defense of simple architectures
- L8 Explains Career of Software Engineers
- Master the Art of Caching for System Design Interviews: A Complete Guide
- PIDs: Creating Stable Control in Games
- Programming as Theory Building
- Refactoring
- Services By Lifecycle
- The Entity Service Antipattern
- Tidy First? Kent Beck on Refactoring
- What Sets an Exceptional Programmer Apart From an Ordinary Programmer
- What the heck is the edge anyway?