I am trying to understand the concepts of adapter-tuning, prompt-tuning, and prefix-tuning in the context of few-shot learning.
It appears to me that prompt-tuning can be applied to a black-box language model.
I read that for prompt tuning the entire pre-trained language model is frozen. If that's the case, prompt tuning could be applied to an OpenAI model like GPT-3 or Codex.
How could I do prompt tuning with OpenAI Codex?
Can anyone please point me in the right direction?
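
To make my understanding concrete, below is a minimal sketch of how I believe prompt tuning works when the model weights are accessible locally. It uses GPT-2 from Hugging Face transformers as a stand-in (I cannot download Codex weights), and the names `soft_prompt` and `n_virtual` are my own. The idea is that only the prepended soft-prompt embeddings receive gradients while the pre-trained model stays frozen:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Freeze every parameter of the pre-trained model; only the soft prompt trains.
for param in model.parameters():
    param.requires_grad = False

n_virtual = 20                 # number of trainable "virtual tokens"
emb_dim = model.config.n_embd  # 768 for gpt2
soft_prompt = torch.nn.Parameter(torch.randn(n_virtual, emb_dim) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

# A single toy training example.
batch = tokenizer("Translate English to French: cheese => fromage",
                  return_tensors="pt")
input_ids = batch["input_ids"]

# Embed the real tokens, then prepend the trainable soft-prompt embeddings.
token_embeds = model.transformer.wte(input_ids)  # (1, T, 768)
prompt_embeds = soft_prompt.unsqueeze(0)         # (1, n_virtual, 768)
inputs_embeds = torch.cat([prompt_embeds, token_embeds], dim=1)

# Mask the virtual-token positions out of the loss with -100.
labels = torch.cat([torch.full((1, n_virtual), -100), input_ids], dim=1)

loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()  # gradients flow only into soft_prompt
optimizer.step()
```

Is this the right mental model, and is there an equivalent workflow for Codex through the OpenAI API?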