
hip_llama.cpp

Llama 2 inference on AMD GPU systems.

Getting Started

  • Install the dependencies (building for AMD GPUs assumes a working ROCm/HIP toolchain).
  • Clone the repository.
  • Build the project.
git clone https://github.com/tienpm/hip_llama.cpp.git
cd hip_llama.cpp
make
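
If the build succeeds, the inference binary should appear at the path used in the Usage examples below; a quick sanity check, assuming that layout:
ls ./build/apps/llama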

Usage

  • Instructions on how to use the project (model.bin is the model checkpoint; -f and -o name the input and output files):
./build/apps/llama model.bin -m test -f <input_filename> -o <output_filename>
  • Example of how to run Llama 2 inference (see also the end-to-end sketch after this list):
./build/apps/llama /shared/erc/getpTA/main/modelbin/stories110M.bin -m test -f assets/in/gen_in_128.txt -o assets/out/gen_out_128.txt
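
A minimal end-to-end run might look like the following; the prompt text and file names are illustrative, and it assumes the input file holds the prompt to generate from, as the example above suggests:
echo "Once upon a time" > assets/in/prompt.txt
./build/apps/llama /shared/erc/getpTA/main/modelbin/stories110M.bin -m test -f assets/in/prompt.txt -o assets/out/prompt_out.txt
cat assets/out/prompt_out.txt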

Documentation

  • Not available yet.

Contributing

  • If you have an issue or a feature request, please open an issue on GitHub or contact the contributors by email (see below).

License

GPL-3.0

Contributors

Full Name           Email
Pham Manh Tien      [email protected]
Nguyen Huy Hoang    [email protected]
Nguyen Xuan Anh     [email protected]

Acknowledgments

Reference: