Dwarves

Forward Engineering Quarter 3, 2024

At Dwarves, technology is our passion. We create it, study it, test it, document it, make it open source, and always strive to enhance it for the benefit of all. Our goal is to promote software craftsmanship and drive innovation. In this issue of Forward Engineering, we’ll walk you through our experiments with new tech stacks, share insights on achieving engineering excellence, and reflect on key lessons from the tech market over the past three months. Unsurprisingly, AI has been a central theme in many of our discussions. We invite you to join us as we explore these discoveries and encourage you to freely contribute your own thoughts along the way.

Tech Radar

Dify

Adopt

Dify is an open-source platform that’s making waves by simplifying the development and orchestration of LLM (Large Language Model) applications. With its robust set of tools, developers can create intelligent workflows—from simple agents to sophisticated AI-driven apps—using a retrieval-augmented generation (RAG) engine. What’s impressive is how it makes AI workflow orchestration intuitive and accessible, even if you’re not a tech wizard. The drag-and-drop interfaces and clean UX/UI reduce the complexity of building LLM-based applications, enabling rapid prototyping and testing across multiple models.

We’ve been using Dify to quickly prototype product ideas by scaffolding agent workflows and testing their output with various models. It’s been a game-changer, especially in building workflow automations like a tech summarizer, memo chatbot, or report composer. Its simplicity and flexibility allow us to experiment with different models and agent workflows without getting bogged down by infrastructure concerns.

Our engineers have built and experimented with dozens of workflows on our self-hosted Dify server.

LangGraph

Assess

LangGraph is an emerging library designed for building stateful, multi-actor applications using large language models (LLMs). It facilitates the creation of agent and multi-agent workflows by leveraging a graph structure. Each node in the graph acts as an agent responsible for specific tasks, and interactions are managed through edges. This approach enhances productivity by letting developers focus on the specialized functions of each node without worrying about synchronizing inputs and outputs.

The key benefits? Improved visualization and management of complex interactions, division of tasks into manageable sub-problems, and high control over individual agents and their transitions. Despite its promising capabilities, LangGraph is still in its early days. Many techniques and designs are available on GitHub, but it requires further exploration and validation in diverse real-world applications.
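To make the idea concrete, here is a minimal sketch of the graph-of-agents pattern in plain Python. This is not LangGraph's actual API; the node functions, the shared state dict, and the `END` sentinel are all illustrative stand-ins for what the library manages for you.

```python
# Each node is an "agent" that transforms a shared state dict;
# edges decide which node runs next. "END" terminates the walk.
def research(state):
    state["notes"] = f"notes on {state['topic']}"
    return state

def summarize(state):
    state["summary"] = state["notes"].upper()
    return state

NODES = {"research": research, "summarize": summarize}
EDGES = {"research": "summarize", "summarize": "END"}

def run_graph(entry, state):
    node = entry
    while node != "END":
        state = NODES[node](state)
        node = EDGES[node]
    return state

result = run_graph("research", {"topic": "LLM agents"})
```

The payoff is exactly what the library promises: each node only worries about its own task, while the graph handles sequencing and state hand-off.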

RAG

Adopt

Retrieval-Augmented Generation (RAG) is enhancing AI by allowing it to access and utilize data it was never trained on. This makes it invaluable for companies needing to leverage their own data efficiently. Currently, it’s the most cost-effective way for organizations to integrate their proprietary information into AI models. By retrieving relevant documents or streaming the latest data, RAG enhances the contextual understanding of LLM-based applications, thereby improving their performance.

We mostly use RAG to enrich input contexts, whether by referencing static documents like PDFs or streaming real-time data from the internet. Its ability to integrate seamlessly with existing workflows and improve AI performance without extensive retraining makes it a practical choice. However, challenges like ensuring data quality and managing latency during retrieval need careful consideration. Alternatives like purely generative models lack the dynamic data access capabilities, making RAG a superior choice for many real-world applications.
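The retrieve-then-generate loop can be sketched in a few lines. Everything here is a toy: the bag-of-words overlap stands in for real embeddings, and `ask_llm` is a placeholder for an actual model call.

```python
DOCS = [
    "Dify is an open-source LLM orchestration platform.",
    "RAG retrieves relevant documents to enrich prompts.",
    "Devbox creates reproducible development environments.",
]

def score(query, doc):
    # toy relevance: count shared lowercase tokens
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, k=1):
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def ask_llm(prompt):
    return f"[answer based on: {prompt}]"  # stand-in for a model call

def rag_answer(query):
    context = "\n".join(retrieve(query))
    return ask_llm(f"Context:\n{context}\n\nQuestion: {query}")

answer = rag_answer("How does RAG enrich prompts?")
```

In production the scoring step is an embedding similarity search over a vector store, but the shape of the pipeline is the same: retrieve, stuff into the prompt, generate.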

LangSmith

Trial

LangSmith builds on the foundation laid by LangChain, which simplified the prototyping of LLM applications. But LangSmith shifts focus towards production, emphasizing reliability and maintainability. Its standout features include tracing agent workflows for easier debugging and automating testing with dataset creation and evaluators.

While its monitoring tools and tracing capabilities are beneficial for scaling and debugging, it’s still relatively new, and widespread adoption is a work in progress. We’ve been trying out LangSmith in projects where robust production support is crucial. It’s got potential, but it needs further industry validation.

Cursor

Assess

Cursor is a fork of VS Code designed to enhance coding with AI while retaining a familiar text editing experience. What sets this IDE apart is its ability to register documents for reference, significantly boosting productivity by generating accurate and contextually aware code—especially when combined with Claude 3.5 Sonnet.

Our engineers have been testing it out, and the results are promising, particularly in creating templates and skeleton code. The impact is more noticeable at the unit level, making coding more enjoyable and reducing the mental load of syntax and specifics. Since it’s an emerging technology, we’ve placed Cursor in the “Assess” category due to its potential to revolutionize coding practices, despite being relatively new and requiring further exploration.

Devbox

Trial

Devbox is a tool designed to create isolated, reproducible development environments without the need for Docker containers or Nix language expertise. It simplifies onboarding by using a single devbox.json file to set up dependencies and environment configurations, avoiding the clutter of global environments.

Devbox addresses common issues like version conflicts across projects and the resource-intensive nature of Docker on Windows/Mac by leveraging native applications built with Nix. It significantly enhances battery life and system performance, but it does require some familiarity with Nix and can present file permission challenges. We’re giving Devbox a trial run, especially for teams seeking cleaner, more efficient development setups.
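For a sense of how lightweight the setup is, a `devbox.json` might look like the sketch below. The field names follow the Devbox docs; the specific packages, versions, and scripts are illustrative.

```json
{
  "packages": ["go@1.22", "nodejs@20"],
  "shell": {
    "init_hook": ["echo 'devbox shell ready'"],
    "scripts": {
      "test": "go test ./..."
    }
  }
}
```

Running `devbox shell` in a directory with this file drops you into an environment with exactly these dependencies, without touching your global setup.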

The journey of experimenting with Devbox is documented in our memo.

Shadcn/ui

Trial

Shadcn offers beautifully designed, accessible, and customizable UI components that you can easily copy and paste into your applications. This open-source tool enhances development speed by allowing developers to quickly scaffold UI components. In our recent projects, we saw significant time savings.

Initially, we had concerns about maintaining consistency with a copy-paste model, but our experience proved otherwise. Customization at the Tailwind config level ensures a cohesive theme, and the lightweight nature of the tool keeps applications fast to load and build. While it may lack the comprehensive ecosystem of full-set frameworks like MUI or Chakra, its modularity and potential AI-backed features with v0.dev position Shadcn as a compelling alternative.

Highlights on Memo

AI & LLM

History of Structured Outputs for LLMs

Why is structured output, such as JSON, so vital in LLM API endpoints? We believe it will soon become standard across all tooling.
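The value is easiest to see from the caller's side: instead of regex-scraping free text, the application parses JSON and validates the fields it needs. In this sketch, `raw` stands in for a model response and the sentiment schema is a hypothetical example.

```python
import json

def parse_sentiment(raw):
    # Reject anything that isn't valid JSON with the expected fields.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if data.get("sentiment") in {"positive", "negative", "neutral"}:
        return data
    return None

result = parse_sentiment('{"sentiment": "positive", "confidence": 0.92}')
```

When the endpoint itself guarantees schema-conforming output, the error-handling branch becomes a rarely-taken safety net instead of the main event.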

Re-ranking in RAG

Sometimes, embeddings might not effectively extract the most accurate sources to enrich the context. Re-ranking offers an additional step to sift out the most relevant context for the initial query.
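The two-stage shape can be sketched as follows: a cheap first-pass retriever returns candidates, then a stronger scorer re-orders them before they reach the prompt. Here `rerank_score` is a placeholder for a cross-encoder model, with hard-coded scores for illustration.

```python
# (doc_id, first-pass embedding score) pairs
CANDIDATES = [
    ("doc-a", 0.41), ("doc-b", 0.39), ("doc-c", 0.38), ("doc-d", 0.10),
]

def first_pass(k=3):
    # cheap retrieval: keep the top-k by embedding score
    return [doc for doc, _ in sorted(CANDIDATES, key=lambda x: -x[1])[:k]]

def rerank_score(query, doc):
    # stand-in for a cross-encoder; pretend it prefers doc-c here
    return {"doc-a": 0.2, "doc-b": 0.5, "doc-c": 0.9}.get(doc, 0.0)

def rerank(query, docs, top_n=1):
    return sorted(docs, key=lambda d: -rerank_score(query, d))[:top_n]

best = rerank("example query", first_pass())
```

Note how the document that barely made the first-pass cut can still win after re-ranking, which is exactly the failure mode of embeddings this step corrects.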

Design feedback mechanism for LLM applications

Capturing user feedback while they’re using the app is crucial for understanding the app’s performance and accuracy. This indispensable step precedes any further plans to improve app performance.

Multi-agent collaboration for task completion

We discuss the architecture and setup of the “divide and conquer” strategy to distribute workloads to multiple agents.
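A toy version of that coordinator pattern: a top-level function splits the work into sub-tasks, dispatches each to a specialized worker, then merges the results. The workers here are plain functions standing in for LLM agents, and the names are made up for illustration.

```python
def outline_worker(topic):
    return f"outline({topic})"   # stand-in for an outlining agent

def draft_worker(topic):
    return f"draft({topic})"     # stand-in for a drafting agent

WORKERS = {"outline": outline_worker, "draft": draft_worker}

def coordinator(topic):
    # dispatch the same task to each specialist, then merge
    results = {name: worker(topic) for name, worker in WORKERS.items()}
    return " + ".join(results[n] for n in ("outline", "draft"))

report = coordinator("Q3 memo")
```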

Journey of Thought Prompting: Harnessing AI to Craft Better Prompts

AI proves to be an excellent tool for crafting and improving system prompts, which are among the most important levers for getting the most out of any LLM.

Further research

Golang

Golang Weekly Commentary Series

We’ve been diving into the Go Weekly commentaries, and here’s what we’ve found:

Go in Enterprise

We strongly advocate for Go due to its simplicity and performance. We believe the Go programming language should gain more popularity, especially in enterprise adoption. This belief prompted us to collect opinions and use cases from others on the subject:

Software Architecture & Modeling

GoF design pattern series

We’re big fans of foundational topics, and it’s interesting to revisit tried and true concepts. This time, we’ve chosen the Gang of Four Design Patterns. Many of the lessons are still significantly relevant to our current coding practices.

Design file sharing system

We discuss the design of a file-sharing system akin to Google Drive, where the path field for file hierarchies streamlines operations, offering fast, efficient storage and retrieval.
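The "path field" idea (often called a materialized path) is easy to sketch: each row stores its full ancestry as a string, so listing a folder's entire subtree is a single prefix match instead of a recursive walk. The file data below is invented for illustration.

```python
FILES = [
    {"id": 1, "name": "drive",   "path": "/1/"},
    {"id": 2, "name": "photos",  "path": "/1/2/"},
    {"id": 3, "name": "cat.png", "path": "/1/2/3/"},
    {"id": 4, "name": "docs",    "path": "/1/4/"},
]

def subtree(folder_path):
    # In SQL this would be roughly: WHERE path LIKE '/1/2/%'
    return [f["name"] for f in FILES
            if f["path"].startswith(folder_path) and f["path"] != folder_path]

children = subtree("/1/2/")
```

Moving a folder is then a prefix rewrite on its descendants' paths, which is the main trade-off of this design versus adjacency lists.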

Designing a model with dynamic properties

Anyone who has used Notion has marveled at the flexibility of its properties, which can be dynamically created and altered without hassle. We’ll reveal the structure behind this flexibility, drawing on our experience developing a very similar feature.
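One common way to model this (a sketch of the general idea, not necessarily Notion's internals) is an entity-attribute-value layout: property definitions live in one table and values in another, so adding a new property never requires a schema migration. The tables below are illustrative dicts.

```python
PROPERTY_DEFS = {
    "status": {"type": "select"},
    "due":    {"type": "date"},
}
PROPERTY_VALUES = [
    {"page": "task-1", "prop": "status", "value": "In progress"},
    {"page": "task-1", "prop": "due",    "value": "2024-09-30"},
]

def page_properties(page_id):
    # assemble a page's dynamic properties from the value rows
    return {v["prop"]: v["value"] for v in PROPERTY_VALUES
            if v["page"] == page_id}

def add_property(name, prop_type):
    PROPERTY_DEFS[name] = {"type": prop_type}  # no migration needed

props = page_properties("task-1")
```

The cost of this flexibility is that querying and type-checking move from the database schema into application code, which is where most of the design effort goes.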

Local-first software

An overview of Local-First software, where data ownership shifts to the user, offering privacy and offline functionality, but facing technical challenges like CRDT complexity and secure synchronization.
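As a taste of the CRDT machinery involved, here is a minimal grow-only counter (G-Counter): each replica increments only its own slot, and merging takes the element-wise maximum, so replicas converge no matter the sync order.

```python
def increment(counter, replica_id):
    counter[replica_id] = counter.get(replica_id, 0) + 1

def merge(a, b):
    # element-wise max makes merge commutative, associative, idempotent
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(counter):
    return sum(counter.values())

# Two replicas edit offline, then sync:
alice, bob = {}, {}
increment(alice, "alice"); increment(alice, "alice")
increment(bob, "bob")
merged = merge(alice, bob)
```

Counters are the easy case; the "CRDT complexity" mentioned above shows up with richer types like sequences and rich text, where preserving user intent under concurrent edits is much harder.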

Blockchain

Solana core concept

We dive into Solana’s unique architecture, separating program code from data and leveraging innovations like Proof of History (PoH) and Program Derived Addresses (PDAs).

Ton: Blockchain of blockchains

TON’s innovative model refines how decentralized applications and transactions can function on a massive scale.

Using Foundry for EVM smart contract development

Our thoughts on Foundry, a framework developed for creating EVM smart contracts. This unified toolchain leverages the speed of Rust for faster workflows and supports advanced features such as Solidity scripting and dependency management.

Market Report

Layoffs continue in tech world, surge in August

August’s surge in job cuts reflects growing economic uncertainty and shifting market dynamics. According to the report:

The biggest growth in planned layoffs came in the technology field, with companies announcing 41,829 cuts, the most in 20 months.

Some of the companies announcing cuts include:

The increasing trend in layoffs year-over-year is concerning. It suggests that the tech industry’s job market instability is not a short-term phenomenon but potentially a longer-term restructuring. This could lead to a reimagining of workforce strategies in tech, possibly emphasizing more contract work or AI-augmented roles.

Source: https://layoffs.fyi

The State of Tech Market in 2024

The broader tech ecosystem, with JavaScript as a prominent example, is experiencing notable shifts in 2024. A tightening job market for software engineers has slowed career progression and increased competition, leading to several key trends across the industry:

The 2024 Stack Overflow survey highlights intriguing data:

AI Makes a Real Impact in Programming

AI is no longer just a buzzword; it’s making a tangible impact in programming.

As Amazon’s CEO, Andy Jassy, noted:

In under six months, we’ve been able to upgrade more than 50% of our production Java systems to modernized Java versions at a fraction of the usual time and effort. And, our developers shipped 79% of the auto-generated code reviews without any additional changes.

Our thought on that:

AI companies continue dominating YC batch

According to a Reddit post, in the current Y Combinator batch (S24 - Summer 2024), a staggering 72% of startups are focused on AI—a dramatic increase from just 1% in the winter of 2012 (W12). Compared to the crypto trend, AI’s momentum is exponentially greater.

Some key takeaways:

It’s evident that AI is not just a trend but a fundamental shift in how businesses operate. The rise of AI-focused startups indicates a significant transformation in the startup ecosystem.

References