Don’t Get Lost in the Code: A Beginner’s Guide to the 6 Essential Parts of the LangChain Library

How to Train Your Ai
5 min read · Jan 27, 2024


If you’re looking to build smarter AI applications, keep your eye on LangChain.

LangChain is a rising star in the LLM world that’s making it easier to build sophisticated applications with LLMs.

It provides a collection of building blocks and tools that make it easier to compose complex, context-aware applications with reasoning capabilities. LangChain is still young, but its unique approach is making waves.

So, if you’re looking to build smarter AI applications that go beyond “Hello World,” keep your eye on LangChain — it might just be the missing piece you’ve been waiting for.

Building Smarter AI with LangChain: Beyond “Hello World”

Large language models (LLMs) like Bard and GPT-3 are powerful tools, but building sophisticated applications with them can feel like wrangling a firehose of text.

Enter LangChain, a rising star in the LLM world that’s making things simpler and smarter.

Think of LangChain as LEGO® for LLMs. Instead of clunky single calls, you can snap together pre-built blocks like data sources, reasoning modules, and even other LLMs.

This lets you build applications that truly understand and respond to complex tasks, not just generate generic text.


Why is this important? Imagine an AI assistant that can:

  • Research and summarize complex topics for you, like writing a report on climate change.
  • Help you brainstorm creative ideas by combining different sources of inspiration.
  • Personalize your learning by adapting to your specific needs and interests.

LangChain’s 6 Key Components Work Together

Model:

This is the heart of LangChain: the LLM you’ll be using. It could be Bard, GPT-3, or any other compatible model.

LangChain acts as a bridge between you and the model, allowing you to interact with it in a more structured and controlled way.

Models: Cohere, HuggingFaceHub, GPT-3, Jurassic-1 Jumbo, Megatron-Turing NLG, Bloom

Handle text generation, translation, and other language processing tasks.

from langchain.llms import Cohere, HuggingFaceHub, OpenAI

# Pick whichever provider you have an API key for:
llm = OpenAI(model_name="text-ada-001")
# llm = Cohere()
# llm = HuggingFaceHub(repo_id="google/flan-t5-base")

result = llm("Explain gravity with an example")

In this example we imported three LLM wrappers: Cohere, HuggingFaceHub, and OpenAI. They all share the same interface, so swapping providers is a one-line change.

Prompts:

Imagine prompts as instructions or guidelines for your LLM. They provide context and direction, shaping the output you receive.

LangChain allows you to craft carefully structured prompts that take into account the current conversation, user input, and desired outcome.

  • Tools: LangChain’s PromptTemplate and ChatPromptTemplate classes, Hugging Face’s templating system

Provide context and direction to LLMs, shaping their responses.

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Explain {topic} to me like I'm 5 years old")
model = ChatOpenAI()
chain = prompt | model
  • Prompting “Write a poem about a cat exploring a cyberpunk city” to generate a creative text piece within that specific context.
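Conceptually, a prompt template is just a string with named slots that get filled in at run time. As a rough sketch of the idea (a toy stand-in, not LangChain’s actual implementation), here is what such a template looks like in plain Python:

```python
import string


class SimplePromptTemplate:
    """Toy stand-in for a prompt template: a string with named slots."""

    def __init__(self, template: str):
        self.template = template
        # Pull the variable names out of the format string
        self.input_variables = [
            name for _, name, _, _ in string.Formatter().parse(template) if name
        ]

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)


tmpl = SimplePromptTemplate("Explain {topic} to me like I'm 5 years old")
print(tmpl.input_variables)           # ['topic']
print(tmpl.format(topic="gravity"))   # Explain gravity to me like I'm 5 years old
```

LangChain’s real PromptTemplate works the same way at its core: it knows which variables the template expects and substitutes them in before the text ever reaches the model.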

Chains:

Think of chains as sequences of actions or steps that LangChain takes to fulfill your request.

They combine various components like models, prompts, and data sources to achieve complex goals.

You can build custom chains or utilize pre-built ones for common tasks like question answering or summarization.

Tools: LangChain’s SequentialChain, SimpleSequentialChain, and ConversationalRetrievalChain

  • Define sequences of actions or steps to fulfill requests, combining models, prompts, and data sources.
from langchain.schema import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")

model = ChatOpenAI()

chain1 = prompt1 | model | StrOutputParser()

In this example, we chain prompt1 into the model, which is ChatOpenAI, and parse the result into a plain string with StrOutputParser.
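The | operator above composes steps into a pipeline where each step’s output becomes the next step’s input. A toy version of that idea in plain Python (no LLM call, with a fake model standing in for ChatOpenAI) looks like this:

```python
class Runnable:
    """Toy pipeline step: supports | composition and .invoke(), like LangChain's chains."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: feed this step's output into the next step
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# A toy "prompt" step that fills in the template, and a fake "model" step
prompt = Runnable(lambda d: f"what is the city {d['person']} is from?")
fake_model = Runnable(lambda text: text.upper())  # stands in for the LLM

chain = prompt | fake_model
print(chain.invoke({"person": "Ada Lovelace"}))
# WHAT IS THE CITY ADA LOVELACE IS FROM?
```

This is only a sketch of the composition pattern; LangChain’s real chains also handle batching, streaming, and async execution.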

Memory:

LangChain isn’t just about one-off interactions. It can remember past conversations and interactions through its memory component.

This allows for more context-aware responses and personalized experiences over time.

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()

history.add_user_message("hi!")

history.add_ai_message("How are you doing?")

LangChain’s ChatMessageHistory class stores the conversation so that later responses can take earlier messages into account.
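At its core, a chat history is just an ordered list of (role, content) messages that gets prepended to each new prompt. A minimal plain-Python sketch of that idea (a toy illustration, not LangChain’s actual class):

```python
class SimpleChatHistory:
    """Toy chat history: an ordered list of (role, content) messages."""

    def __init__(self):
        self.messages = []

    def add_user_message(self, text):
        self.messages.append(("user", text))

    def add_ai_message(self, text):
        self.messages.append(("ai", text))

    def as_prompt_context(self):
        # Render the history as text to prepend to the next prompt
        return "\n".join(f"{role}: {text}" for role, text in self.messages)


history = SimpleChatHistory()
history.add_user_message("hi!")
history.add_ai_message("How are you doing?")
print(history.as_prompt_context())
# user: hi!
# ai: How are you doing?
```

Feeding this rendered history back into the model on every turn is what makes the conversation feel continuous.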

Indexes:

These act as efficient organizers for your data sources, whether it’s internal documents, external APIs, or user-uploaded files.

LangChain uses indexes to quickly locate and retrieve relevant information that can be fed into the models and chains.

from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

loader = PyPDFLoader("pdf_file_name.pdf")
docs = loader.load()

db = Milvus.from_documents(docs, OpenAIEmbeddings())

results = db.similarity_search("What is this document about?")
  • Tools: Vector databases (e.g., Pinecone, Weaviate, Jina), Elasticsearch, traditional document stores
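Under the hood, a vector store embeds each document as a vector and ranks documents by similarity to the embedded query. A stripped-down sketch of that retrieval step, using toy bag-of-words "embeddings" (real stores use learned embedding models and approximate nearest-neighbor indexes):

```python
import math
from collections import Counter


def embed(text):
    # Toy "embedding": bag-of-words counts (real systems use learned vectors)
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def similarity_search(query, docs, k=1):
    # Rank documents by cosine similarity to the query and return the top k
    scored = [(d, cosine(embed(query), embed(d))) for d in docs]
    return [d for d, _ in sorted(scored, key=lambda x: -x[1])[:k]]


docs = [
    "gravity pulls objects toward each other",
    "the stock market rose today",
]
print(similarity_search("why does gravity pull things", docs))
```

The retrieved documents are then fed into the prompt so the model can answer from your own data rather than its training set.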

Agent and Tools:

These let your application act, not just generate text. An agent uses the LLM to decide which actions to take, and tools are the capabilities it can call on, such as web search, calculators, database lookups, or other APIs.

The model picks a tool, runs it, and incorporates the result into its answer, which makes building interactive applications much smoother.

Example: a search tool that lets the LLM look up current information (like tonight’s game schedule) before answering the user.

from langchain.tools import DuckDuckGoSearchRun
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

search = DuckDuckGoSearchRun()

template = """turn the following user input into a search query for a search engine:

{input}"""
prompt = ChatPromptTemplate.from_template(template)

model = ChatOpenAI()

chain = prompt | model | StrOutputParser() | search

chain.invoke({"input": "I'd like to figure out what games are tonight"})
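At its simplest, the agent-and-tools pattern is a dispatcher: the model chooses a tool by name, and the framework routes the request to it. A toy sketch of that dispatch step (the tool names and functions here are invented for illustration; a real agent parses the tool choice out of the LLM’s output):

```python
def search_tool(query):
    # Stand-in for a real web search call
    return f"search results for: {query}"


def calculator_tool(expr):
    # Tiny evaluator for demo purposes only; never eval() untrusted input
    return str(eval(expr, {"__builtins__": {}}, {}))


TOOLS = {"search": search_tool, "calculator": calculator_tool}


def run_agent(action, argument):
    """Dispatch the model's chosen action to the matching tool."""
    if action not in TOOLS:
        return f"unknown tool: {action}"
    return TOOLS[action](argument)


print(run_agent("calculator", "2 + 3"))        # 5
print(run_agent("search", "games tonight"))    # search results for: games tonight
```

The real loop repeats this step: the model sees the tool’s result and decides whether to call another tool or produce a final answer.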

Tools: DuckDuckGoSearchRun and LangChain’s other built-in tool integrations

Additional examples using specific tools:

  • Indexing scientific articles: Using Pinecone or Jina to create a vector database of research papers, enabling efficient retrieval for question answering or summarization tasks.
  • Building a conversational AI assistant: Combining Bard, a memory system, and a vector database to create an assistant that can remember past conversations, access relevant information, and generate personalized responses.
  • Summarizing research papers: Using a chain that involves Elasticsearch for document retrieval, GPT-3 for summarization, and a memory system to track user preferences and adjust summaries accordingly.

Conclusion:

By understanding and combining these building blocks, developers can unlock the true potential of LangChain and create sophisticated AI applications that go beyond simple text generation.

Remember, LangChain is still under development, but its modular approach and powerful capabilities are making it a rising star in the LLM world.

So, keep an eye on this innovative library and explore how it can help you build the next generation of AI applications!
