NLP: LLM API Call Examples


Qwen

  • Qwen API sign-up: get an API Key
  • Qwen API reference: Qwen-API Doc
  • Gripe: Qwen's documentation and sign-up links are poorly organized; Alibaba Cloud has so many products that you end up flipping back and forth just to find the right page
  • Qwen API call example:
    import requests

    def qwen_api():
        url = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"
        headers = {
            "Authorization": "Bearer $API_KEY",  # replace $API_KEY with your own key
            "Content-Type": "application/json"
        }

        data = {
            "model": "qwen-plus",
            "messages": [
                {"role": "user", "content": "你好,请介绍一下你自己"}
            ],
            "max_tokens": 50,
            "temperature": 0.0,  # greedy-sampling example
            "top_p": 0.2,        # greedy-sampling example (the parameter is top_p, not top)
            "logprobs": True,    # enable logprobs to see each token's logprob; e^logprob gives the probability
            "top_logprobs": 2,
        }

        response = requests.post(url, headers=headers, json=data)
        print(response.json())

    if __name__ == "__main__":
        qwen_api()
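With `logprobs` enabled, each generated token in the response carries its log-probability, and exponentiating (p = e^logprob) recovers the actual probability. A minimal sketch of that conversion, assuming the OpenAI-compatible response layout (`choices[i].logprobs.content` with `token`/`logprob` fields; the payload below is hand-made, not a real API response):

```python
import math

def token_probs(choice):
    """Convert per-token logprobs from an OpenAI-compatible choice
    into plain probabilities via p = e^logprob."""
    return [
        (item["token"], math.exp(item["logprob"]))
        for item in choice["logprobs"]["content"]
    ]

# Hand-made response fragment for illustration:
choice = {
    "logprobs": {
        "content": [
            {"token": "你", "logprob": -0.01},
            {"token": "好", "logprob": -0.69},
        ]
    }
}
for tok, p in token_probs(choice):
    print(tok, round(p, 2))
```

A logprob of -0.69 maps back to roughly 0.5 probability, which makes the numbers much easier to eyeball than raw logprobs.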

LongCat

  • LongCat docs: LongCat API Open Platform quick start

  • The docs are clear and well organized; Qwen could learn a thing or two from them

  • LongCat API call example:

    import requests

    def longcat_api():
        url = "https://api.longcat.chat/openai/v1/chat/completions"
        headers = {
            "Authorization": "Bearer $API_KEY",  # replace $API_KEY with your own key
            "Content-Type": "application/json"
        }

        data = {
            "model": "LongCat-Flash-Chat",
            "messages": [
                {"role": "user", "content": "你好,请介绍一下自己"}
            ],
            "max_tokens": 1000,
            "temperature": 0.7,
            # "logprobs": True,  # enabling this parameter returns an error
        }

        response = requests.post(url, headers=headers, json=data)
        print(response.json())

    if __name__ == "__main__":
        longcat_api()
  • Note in particular: LongCat currently does not support returning logprobs
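Because an unsupported parameter (such as `logprobs` here) can turn the whole response into an error payload, it is worth checking for an error before indexing into `choices`. A minimal sketch, assuming the common OpenAI-style `{"error": {"message": ...}}` error shape (the payloads below are hand-made, not real API responses):

```python
def extract_reply(resp_json):
    """Return the assistant message text, or raise with the API's error message.
    Assumes an OpenAI-style success/error response layout."""
    if "error" in resp_json:
        err = resp_json["error"]
        raise RuntimeError(f"API error: {err.get('message', err)}")
    return resp_json["choices"][0]["message"]["content"]

# Hand-made payloads for illustration:
ok = {"choices": [{"message": {"role": "assistant", "content": "你好!"}}]}
bad = {"error": {"message": "logprobs is not supported"}}

print(extract_reply(ok))  # prints the assistant's reply
```

Failing loudly with the server's own error message beats a bare `KeyError: 'choices'` when debugging.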