A research team from Peking University has released ChatLaw, an open-source large model focused on legal knowledge. To build the dialogue database, the team collected a substantial amount of raw text, including legal news, forum posts, statutes, judicial interpretations, legal consultations, Chinese Judicial Examination questions, and court judgments. ChatLaw currently offers three versions: ChatLaw-13B, ChatLaw-33B, and ChatLaw-Text2Vec. ChatLaw-13B is based on Ziya-LLaMA-13B-v1, an open-source large model from the Cognitive Computing and Natural Language Center of the International Digital Economy Academy (IDEA). While it performs well on a range of Chinese-language tasks, it has limitations in answering complex legal questions. ChatLaw-33B, trained on the Anima-33B large language model, excels at logical reasoning but struggles to answer questions in Chinese because of the limited Chinese corpus it was trained on. ChatLaw-Text2Vec is a similarity-matching model, fine-tuned from Google's language representation model BERT on a dataset of 930,000 judgment cases, that matches user queries with the corresponding legal provisions (a sketch of this retrieval step follows below). According to AIGC OPEN, a Chinese news site, ChatLaw now supports multi-turn dialogue and demonstrates more professional, specialized legal performance than comparable products, though it does not yet include professional legal consulting functionality. [AIGC OPEN, in Chinese]
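
The query-to-provision matching that ChatLaw-Text2Vec performs can be illustrated with a minimal sketch. This is not the team's actual code: the encoder name (`bert-base-chinese`), the `embed` helper, and the sample provisions are placeholders standing in for the real checkpoint and the 930,000-case database. It only shows the general pattern of embedding a query and candidate statutes with a BERT-style model and ranking them by cosine similarity.

```python
# Sketch of BERT-based query-to-statute similarity matching, in the spirit
# of ChatLaw-Text2Vec. Assumptions: a generic Chinese BERT encoder and two
# made-up provision snippets; the real system uses its own checkpoint and
# a large database of legal provisions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-chinese"  # placeholder, not the ChatLaw checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(texts):
    """Mean-pool the last hidden states into one vector per input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)                   # ignore padding tokens
    return summed / mask.sum(dim=1).clamp(min=1e-9)

# Hypothetical snippets standing in for the real provision database.
provisions = [
    "Article 1043: Families should establish good family values ...",
    "Article 1079: If one spouse requests a divorce, mediation ...",
]
query = ["What are the legal grounds for filing for divorce?"]

q_vec, p_vecs = embed(query), embed(provisions)
# One cosine-similarity score per provision; the highest-scoring one
# is returned as the best-matching legal provision for the query.
scores = torch.nn.functional.cosine_similarity(q_vec, p_vecs)
print(provisions[int(scores.argmax())], scores.max().item())
```

Ranking pooled BERT embeddings by cosine similarity is a standard dense-retrieval pattern; swapping in the actual ChatLaw-Text2Vec checkpoint would, under these assumptions, only change `MODEL_NAME` and possibly the pooling step.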