Deploy LLM with AidGen
Introduction
Edge deployment of Large Language Models (LLMs) is the process of compressing, quantizing, and deploying models that originally ran in the cloud onto local devices, enabling offline, low-latency natural language understanding and generation. This chapter demonstrates the deployment, loading, and conversation workflow of an LLM on an edge device using the AidGen inference engine.
In this example, LLM inference runs entirely on the device side: C++ code calls the relevant interfaces to receive user input and return conversation results in real time.
- Device: IQ9075
- System: Ubuntu 24.04
- Model: Qwen2.5-0.5B-Instruct
Supported Platforms
| Platform | Operating System |
|---|---|
| IQ9075 | Ubuntu 24.04 |
Prerequisites
- IQ9075 Hardware
- Ubuntu 24.04 System
- Model Files: Download the Qwen2.5-0.5B-Instruct model resource files from Model Farm.
💡Note
Select the QCS8550 chip option.
System Dependency Configuration
Configure AidLux Dependency Sources
# Download the AidLux public key and add it to the trusted keyring
wget -O- https://archive.aidlux.com/ubuntu24/public.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/private-aidlux.gpg > /dev/null
# Edit the source file
sudo vim /etc/apt/sources.list.d/private-aidlux.list
# Add the AidLux repository entry below to the source file
deb [arch=arm64 signed-by=/etc/apt/trusted.gpg.d/private-aidlux.gpg] https://archive.aidlux.com/ubuntu24 noble main
# Update cache
sudo apt update
Once the update is complete, you can list the official AidLux SDK packages with the following command:
sudo apt list | grep aid | grep unknown
# Install software
# Essential dependencies (not included in system by default)
sudo apt install python3 python3-pip libopencv-dev python3-opencv net-tools
# Must be installed before aidlite
sudo apt install aidlux-aistack-base aidrtcm
# Install aidlite and dependencies
sudo apt install aid-lms aidlms-sdk aidlite-sdk cmake
sudo apt-get install libfmt-dev nlohmann-json3-dev
sudo apt install aidlite-*
# DSP Support
sudo apt-get install qcom-fastrpc1
sudo apt-get install qcom-fastrpc-dev
# Install aidgen-sdk
sudo apt install aidgen-sdk
# Install mms service
sudo apt install aid-mms
# GPU Support
sudo add-apt-repository ppa:ubuntu-qcom-iot/qcom-noble-ppa
sudo apt install qcom-adreno-cl1
sudo ln -s /usr/lib/aarch64-linux-gnu/libOpenCL.so.1 /usr/lib/aarch64-linux-gnu/libOpenCL.so
After installation, verify that the aidlite and aidgen directories have been added to /usr/local/share.

Device Authorization
Obtain Device SN Code
cat /sys/devices/soc0/serial_number
Obtain Authorization File
Provide the SN to AidLux technical personnel to generate a device-specific License file, and place it in the directory /etc/opt/aidlux/license/AidLuxLics.
Deployment
Step 1: Copy AidGen SDK Code Examples
# Copy test code
cd /home/ubuntu
cp -r /usr/local/share/aidgen/examples/cpp/aidllm .
Step 2: Upload & Extract Model Resources
- Upload the downloaded model resources to the edge device.
- Extract the resources to the /home/ubuntu/aidllm directory.
cd /home/ubuntu/aidllm
unzip Qwen2.5-0.5B-Instruct_Qualcomm\ QCS8550_QNN2.29_W4A16.zip -d .
Step 3: Verify Resource Files
The file structure should be as follows:
/home/ubuntu/aidllm
├── CMakeLists.txt
├── test_prompt_abort.cpp
├── test_prompt_serial.cpp
├── aidgen_chat_template.txt
├── chat.txt
├── htp_backend_ext_config.json
├── qwen2.5-0.5b-instruct-htp.json
├── qwen2.5-0.5b-instruct-tokenizer.json
├── qwen2.5-0.5b-instruct_qnn229_qcs8550_4096_1_of_2.serialized.bin
├── qwen2.5-0.5b-instruct_qnn229_qcs8550_4096_2_of_2.serialized.bin
Step 4: Configure Conversation Template
💡Note
Refer to the aidgen_chat_template.txt file in the model resource package for the correct template.
Modify the test_prompt_serial.cpp file based on the model template:
if (prompt_template_type == "qwen2") {
    prompt_template = "<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n{0}<|im_end|>\n<|im_start|>assistant\n";
}
Step 5: Compile and Run
# Install dependencies
sudo apt update
sudo apt install libfmt-dev
# Compile
mkdir build && cd build
cmake .. && make
# Run after successful compilation
# Argument 1: model configuration file
# Argument 2: enable profiler statistics (1 = on)
# Argument 3: number of inference loops
mv test_prompt_serial /home/ubuntu/aidllm/
cd /home/ubuntu/aidllm/
./test_prompt_serial qwen2.5-0.5b-instruct-htp.json 1 1
- Enter your text in the terminal to start the conversation.
