
Releases: ittia-research/check

v.0.0.3

09 Sep 10:40
dcac942

What's Changed

  • Lint in #22
  • Add JSON return format support, add a PyPI package for connecting to the API, and return all citations instead of only the winning one in #25
  • Improve exception handling
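As a rough illustration of the "return all citations" change above, a client can now read every citation from the JSON response rather than only those backing the final verdict. The field names and values below are assumptions for the sketch, not the project's actual schema:

```python
import json

# Hypothetical JSON response body; "statement", "verdict", and "citations"
# are assumed field names, not the documented API schema.
raw = json.dumps({
    "statement": "The Earth orbits the Sun.",
    "verdict": "true",
    "citations": [
        {"source": "https://example.com/a", "verdict": "true"},
        {"source": "https://example.com/b", "verdict": "true"},
        {"source": "https://example.com/c", "verdict": "false"},
    ],
})

result = json.loads(raw)
# All citations are exposed, including the dissenting one,
# instead of only the citations behind the winning verdict.
all_sources = [c["source"] for c in result["citations"]]
```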

Full Changelog: v.0.0.2...v.0.0.3

v.0.0.2

02 Sep 14:53
c11d941

Highlight

  • Fully integrated with DSPy, using the MIPROv2 optimizer
  • Changed default embedding and rerank inference to an Infinity API server, which is faster and more stable
  • Fixed the Cloudflare 524 timeout error
  • Changed the search backend to https://search.ittia.net
  • Added examples on how to create a wiki_dpr index and start a retriever server

What's Changed

  • Add DSPy pipeline in #7
  • Add API key support to LlamaIndex OllamaEmbedding in #9
  • Change all LLM calls to DSPy, increase the citation token limit in #10
  • Change the base image to CUDA, change to dspy.Retrieve in #11
  • Add a script to create a dataset based on HotPotQA, update infra in #12
  • Add a wiki_dpr retriever for DSPy compile in #15
  • Change to multi-sources mode in #17
  • Change the API response to streaming in #18
  • Move pipelines into one single class, change to a streaming search backend in #19
  • Change the default search backend, update the endpoint in #20
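The move to a streaming API response (#18) means clients consume incremental events rather than waiting for one final payload. A minimal sketch of the consumer side, where the generator stands in for a real streaming HTTP body (in practice something like `requests.get(url, stream=True).iter_lines()`) and the event fields are hypothetical:

```python
import json

def stream_lines():
    """Stand-in for a streaming response body. In a real client this
    would be the line iterator of an HTTP response; the "stage" and
    "status" event fields here are assumptions for illustration."""
    yield b'{"stage": "search", "status": "running"}'
    yield b'{"stage": "verdict", "status": "done", "verdict": "true"}'

# Parse each newline-delimited JSON event as it arrives.
events = [json.loads(line) for line in stream_lines()]
final = events[-1]
```

Streaming lets intermediate progress (e.g. search started, retrieval done) reach the client early, which is also how long-running checks avoid proxy timeouts such as the Cloudflare 524 error fixed in v0.0.2.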

v.0.0.1

15 Aug 09:53

Features:

  • For each statement, generate multiple verdicts against every web search page.
  • Weight the verdicts into one final verdict, and show related context and sources.
  • Retrieval: LlamaIndex auto-merging retriever, with embedding and rerank.
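The first two features can be sketched as a simple weighted vote: each web search page yields a verdict with a weight, and the label with the largest total weight wins. The labels, weights, and field names below are illustrative assumptions, not the project's actual pipeline:

```python
from collections import defaultdict

def weight_verdicts(verdicts):
    """Sum the weights per verdict label; return the winning label
    together with the sources that supported it."""
    totals = defaultdict(float)
    for v in verdicts:
        totals[v["verdict"]] += v["weight"]
    final = max(totals, key=totals.get)
    sources = [v["source"] for v in verdicts if v["verdict"] == final]
    return final, sources

# Hypothetical per-page verdicts for one statement.
verdicts = [
    {"verdict": "true", "weight": 0.9, "source": "https://example.com/a"},
    {"verdict": "true", "weight": 0.6, "source": "https://example.com/b"},
    {"verdict": "false", "weight": 0.7, "source": "https://example.com/c"},
]

final, sources = weight_verdicts(verdicts)
```

Keeping the supporting sources alongside the winning label is what makes it possible to "show related context and sources" with the final verdict.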

What's next:

  • A more sophisticated pipeline using DSPy and similar tools.
  • Implement LLM features: multi-shot, chain of thought, etc.