GitHub Copilot integration?

I will admit I am starting to rely on GitHub Copilot more and more in my day-to-day use of Visual Studio and Visual Studio Code. I would love some integration with LINQPad if possible. I searched the forums and was surprised to see it not mentioned yet.



  • I totally agree.

    I found these that might help with the implementation.


    Another way is to just use the Codex model from OpenAI. You could let your customers submit their own API keys.

  • Thanks - will look into that.

  • You can open .linq files inside VS Code and it will recognize ( in most cases ) the language from the code in the file. If it cannot, then you can set the VS Code file language mode of a .linq file to C#, F# or VB.NET as needed.

    If you code in one language, then you can configure a file association for .linq files to C#, F# or VB instead of the default plain text.

    From what I can tell, Copilot in VS Code works independently of any language service/IDE tooling. It just needs to know the programming language in order to do its magic. I believe the default is inferred from the file type, but you can tell the API underneath what language to use directly.

    You can use the referenced .cs file feature of LINQPad, write your Copilot code in that .cs file, then reference it inside a .linq file:

    #load "..\..\source\myutil\copilot-coded-file-here.cs"

    I use Studio and Code for Copilot and other features not supported in LINQPad IDE and it all works seamlessly for me using these techniques. Because linq files are plaintext you can build tools around them.

    Another workflow:

    I have a console project with a .linq file that references my .cs files; when you build/debug, it runs the LINQPad IDE with flags to run and hide the editor ( "F:\LINQPad7-x64.exe" "F:\linqpad-file.linq" -run -hideeditor ). In this case, I use Studio or Code for all the coding and use LINQPad only to interact with the results panel. I never interact with the code editor inside LINQPad.

    Not perfect but works.

  • Using the Codex model from OpenAI is the simplest approach, and would enable some interesting features such as a list of customized prompts (think: "// Bugs in the following code:" or "// A more efficient way to write this:" or "// Outstanding thread-safety issues"), as well as allowing users to tweak AI parameters. Obtaining an API key also seems fairly straightforward.

    From the analysis on GitHub Copilot, it seems that a fair bit of complexity arises in deciding when to initiate a query. It's also imperfect in that it results in false positives and false negatives. False positives equate to greater cost (ultimately borne either by higher subscription fees or higher token usage); false negatives mean waiting for something to appear when nothing does. How would you feel about a key such as Shift+Space to perform an explicit AI call? It would respond immediately with "Processing...", followed by suggestion(s), with a subsequent alpha-key triggering a customized prompt injection.
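A minimal sketch of the customized prompt-injection idea above, assuming a hypothetical `InjectPrompt` helper (none of these names or key bindings are LINQPad's actual API): the alpha-key picks a canned instruction, which is prepended to the selected code before the explicit AI call.

```csharp
using System;

// Hypothetical sketch: each alpha-key maps to a canned instruction that is
// prepended to the selected code before the explicit Shift+Space AI call.
static string InjectPrompt(char key, string code)
{
    string instruction = key switch
    {
        'b' => "// Bugs in the following code:",
        'e' => "// A more efficient way to write this:",
        't' => "// Outstanding thread-safety issues:",
        _   => "// Complete the following code:"
    };
    return instruction + Environment.NewLine + code;
}

Console.WriteLine(InjectPrompt('b', "counter++;"));
```

The combined string would then be sent as the completion prompt, with the model's answer shown as the suggestion.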

  • Copilot often assumes you're writing programs, not expression- or statement-style LINQPad queries ( which makes sense ). It doesn't understand newer C# language features ( I don't either ) as well. Some things don't make sense in LINQPad. I wonder if it'll learn over time, but for now the best results for me are LINQPad Programs using simple, full syntax with no fancy language or shorthand syntax. Also, Code seems to work better and faster than Studio in general.

    LINQPad can simplify a lot, but the code Copilot writes is often longer and more verbose. Now that I think about it more, an integration that understands LINQPad would be LINQPad Copilot, not GitHub Copilot. The key would be the code assistant understanding LINQPad.

    ChatGPT-style prompt-based integration is something I'm focusing more on.

    ChatGPT understands LINQPad and is more accurate in understanding my intent. Different tech and interface, but some combination of interactive code-assistance Q&A ( ChatGPT ) and in-editor code assistance ( Copilot ) "like" features for LINQPad is more interesting to me.

  • FYI:

    After turning off all Visual Studio IntelliCode settings, Copilot now works much better. IntelliCode should be disabled while Copilot is active; I was experiencing collisions between them.

  • FYI:
    IntelliCode can train on your solution (see image).


    Now Copilot and IntelliCode work as expected in my solutions with LINQPad.

  • I think hitting a shortcut would work! My process (in vscode) is normally that I write a comment describing what I want to do:

  • This works with gpt-3.5-turbo, with my prototype prompts.

    Stay tuned. Good things are coming!

  • Try the latest beta:

    Shift+Space to AI-complete.

    Let me know your thoughts.

  • Some notes so far:

    1. It can be slow: 2-3 times out of 5, each word/token being typed can take about 3-4+ seconds.

    2. Switching queries while it writes code seems to break the completion.

    3. An error message was triggered during a completion that failed.

    4. Does it understand expression and statement modes? I get long hangs ("Working...")

    5. Too much code was generated for the comment "create if statement c#", see images. ChatGPT was correct.

      Is it generating every if-statement combination?
      Another really long and wrong result. It appears to be generating/copying similar or related code? I wrote "create a string field". It created a field, but what is the rest? I see another pattern in what is being generated.

      ChatGPT is right again.

    6. Changing editor focus kills "TAB to accept" if you click inside editor again.

    7. I have files with thousands of lines; sending them to OpenAI and back may be what's slowest. They also have a "max request" limit on the number of tokens sent to them. I need to look into this.

    8. Does LINQPad send all the query contents to OpenAI? A 2,000-line file? If so, I shouldn't use it on big files. Do you perform any processing before it is sent?

    With tech like this it can be hard to find the bottleneck. Is it the editor, the OpenAI servers/data, the wrong model, the prompt, a slow internet connection, etc.?

    Can you add the ability to wait for the completion to finish, then write the result all at once, instead of the typewriter-like method?

    Or allow us to dump the response to a panel or inside the editor.

    I made a tool based on gpt-3.5-turbo with a readline for the prompt and the output panel for the result. It is 10x faster without code context. You can see in my image that the response is not source code; it is a numbered list of customers.

    When you add "in c#" to the request you'll get the result with c# code.

    LINQPad uses Davinci, which explains the different results.

    I need to spend time understanding the differences between each model.

    Here is the troubleshooting help from Copilot. It is useful for understanding what can cause issues, including in LINQPad, since it is built on top of the same tech.
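The readline-style gpt-3.5-turbo tool described above could be sketched roughly like this; the system text, method name, and "in c#" steering are illustrative assumptions, not the actual tool:

```csharp
using System;
using System.Collections.Generic;

// Hedged sketch: wrap the typed prompt as chat messages for gpt-3.5-turbo.
// Appending "in c#" steers the model toward code instead of a plain-English
// answer (e.g. a numbered list of customers).
static List<(string Role, string Content)> BuildChatRequest(string prompt, bool wantCode)
{
    string user = wantCode ? prompt + " in c#" : prompt;
    return new List<(string, string)>
    {
        ("system", "You are a coding assistant."),
        ("user", user)
    };
}

var messages = BuildChatRequest("list the top 5 customers", wantCode: true);
Console.WriteLine(messages[1].Content);
```

The message list would then be posted to the chat endpoint and the reply written to the output panel.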


  • I think multiple solutions to a prompt were being generated all at once in those cases I mentioned above. There are many ways to write an if statement, so it returns a list of possible solutions. VS and VS Code both have custom UI that presents multiple "Synthesized Solutions".

    See this example from VS:

    See "solutions 1/10"; you select the one you want.

    See this example from VS Code:


    I only glanced over the docs; so much to learn.

  • Cool, will try this out!

  • This seemed to time out or abort. And the line numbers go from 9 to 11 with a large gap.

    Copied the comment.

    A lot of weird things like this; not sure what's causing them, but I'm documenting it.

  • Starting to get this error more frequently.

  • This is new. It gave me results in f# and vb in a c# file.

  • I'm getting much better results in empty files.

  • When multiple solutions are found you can just hit Tab to prevent it from generating more solutions.

  • As of right now, Invoking (Shift+Space) from signatures or on the same line as a statement is more reliable overall than writing from comments.

    So roughly:
    from signatures and statements - 80% success ( depends on code patterns, what code you have in your file, and its language )
    from comments - 50% success ( depends on what exactly you ask it to do in your comment )

  • Some notes:

    • The gpt-3.5-turbo model is faster (and cheaper) than Davinci, but it's not easy to generate reliable prompts for code completion. Every failure case can be fixed by adding more instructions or another example, but at the cost of increasing the chance of other failures. It's a space that's changing rapidly, so we may see a newer Codex model before long. There are plans for other features that will use gpt-3.5-turbo, but for actual code completion, a model that's been designed for the purpose is probably the best bet.

    • Some of the problems related to completing code after comments appear to be worse with top-level statements - Davinci appears to have some trouble here (especially knowing when to stop - I've observed the same issue in Copilot) although the issue is not unique to top-level statements. It can sometimes help if you write a comment that might realistically appear in code rather than one contrived for the sake of completion. For example:

    string s = "This is a test";
    // Split s into words
    • OpenAI server performance seems to vary day by day and throughout the day (depending on the load on the servers?). I imagine this will improve over time.

    • LINQPad limits the text sent to OpenAI to a few thousand characters. This should yield good performance unless the servers are overloaded. There's a trade-off here; too little context and the completions are less useful.

    • Regarding giving answers in F# and VB, try removing "in C#" from your comment. It's already been prompted to use C#, and adding "in C#" might trigger it to gain inspiration from samples that demonstrate the same concept in a range of languages.

    • Regarding adding the ability to wait for the completion code to finish before writing the result, what would be the benefit? You'll miss out on an early opportunity to cancel or truncate the completion if it's not useful.

    • Regarding offering multiple completion options, I'm looking into the viability of this.
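For illustration, the character limit described above can be pictured as a simple client-side truncation; the 4,000-character budget and the method name here are guesses, not LINQPad's actual implementation:

```csharp
using System;

// Rough sketch: keep only the last few thousand characters before the caret,
// so a 2,000-line query doesn't blow past the model's token limit. This is
// the trade-off mentioned above: less context, less useful completions.
static string TruncateContext(string query, int caretPos, int budget = 4000)
{
    int start = Math.Max(0, caretPos - budget);
    return query.Substring(start, caretPos - start);
}

string bigQuery = new string('x', 10_000) + "// Split s into words";
Console.WriteLine(TruncateContext(bigQuery, bigQuery.Length).Length);  // 4000
```

Anchoring the window at the caret keeps the code nearest the completion point, which is usually the most relevant context.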

  • I had cases where I didn't include "in c#" and it gave the solution in all three languages.
    When it fails, I try many variations and collect the best prompts to make it easier to track what worked.

    AI Snippets:

    Your prompt/comment would represent a code snippet file.
    Invoke shift+space and it would return all the code, same as file snippets.
    You could save AI snippets to file.

    Speed: ( this depends on the model used and other factors, as you mentioned )

    It can be faster to get the whole response first, then push it to the editor, save it to file, dump it or whatever.

    Playground Workflows:

    You want to interact with generated output before committing to it.
    Many other operations and tasks relate to your work, but you may not want their output to stay or go in-editor.

    Code Conversion:

    namespace ConsoleApp1
    {
        internal class Program
        {
            static void Main(string[] args)
            {
                // convert
                //'Dim salmons As New List(Of String) From {"chinook", "coho", "pink", "sockeye"}
                //'For index = 0 To salmons.Count - 1
                //'    Console.Write(salmons(index) & " ")
                // to c#
                List<string> salmons = new List<string> { "chinook", "coho", "pink", "sockeye" };
                for (int index = 0; index <= salmons.Count - 1; index++)
                {
                    Console.Write(salmons[index] + " ");
                }
            }
        }
    }

    Converting code is a task whose result can be immediately dumped to a panel. Because the output is temporary, it would be useful to dump it to panels for review.

    My example below immediately writes the response to the results panel. I continue working while it runs on another tab. I then save an HTML page for the record.
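The convert-and-dump workflow described here could be sketched like this; `BuildConversionPrompt` and the prompt wording are illustrative, not part of LINQPad or the tool above:

```csharp
using System;

// Hypothetical helper: wrap a code snippet in a conversion prompt. The model's
// response would be written to a results panel (or saved as HTML) rather than
// inserted into the editor, since conversion output is temporary.
static string BuildConversionPrompt(string fromLang, string toLang, string code) =>
    $"Convert the following {fromLang} code to {toLang}. Reply with code only.\n{code}";

string vb = "Dim salmons As New List(Of String) From {\"chinook\", \"coho\"}";
Console.WriteLine(BuildConversionPrompt("VB.NET", "C#", vb));
```

The same helper could drive any language pair (C#, F#, VB) by swapping the `fromLang`/`toLang` arguments.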

  • It is already possible to create snippets with code + comments included for reuse.

    Not exactly what I want but can be useful.

  • Nice! Works like a charm.

  • Update: LINQPad now uses GPT-3.5-Turbo instead of Codex, due to the latter's impending doom.

    GPT-3.5-Turbo is more modern and (considerably) cheaper than Codex; however, its natural-language focus means that code-completion tasks will inevitably output English at times instead of code (and the chance of repeating text is somewhat higher). Let me know if you come across any glitches - I've already written a dozen or so transforms to minimize the issues, but there are bound to be more.

  • GPT-3.5-Turbo is faster and much more reliable; great change. I always felt Codex was unusable outside of Copilot's use of it. It seems like Copilot is doing a lot under the hood, beyond Codex. Also, Codex was much better with Python and JavaScript compared to the .NET languages.

    All my previous issues are solved by the gpt model.

    I would really like the ability to either send to a new file or send to the results panel with Tab-Ctrl for code conversions. It would save time if it were built in, and it could open up LINQPad.

    See this example conversion.

    I speak VB and am always converting to and from C#, F# and VB. Tab-Ctrl to send a completion to a new query or output panel would be great for exploration and other use cases.

    Or add basic functions to the LINQPad public API for plugins, to allow for text-editor scripting? Is this already possible?

    Maybe in working and running code:

    // In a running query, dump the prompt and the completion to a panel.
    // In a running query, dump the completion to a new file.
    // In a running query, dump the completion to a panel.
    // AI Completion object.

    or editor flags:

       // code here will only be sent to the AI service
       // code to convert F# to C#

    -aiquery : a compiler flag to tell LINQPad to send the code through the AI service instead of running it.

    Just some ideas to make use of GPT in LINQPad beyond the code editor.
