Can we have an option to automatically compile after the AI makes changes or generates code?

I must admit that I don't seem to have as much success with AI coding as others appear to have. There have been a number of times where I have started to review the results and discovered that the code does not even compile, which is frustrating.

The latest occurrence was after I had asked Copilot (Claude Haiku 4.5) to comment a script: it deleted quite a few lines of code, and the script no longer compiles. I know there is an option to run the script, but I always want to review the changes before doing that.

I would save quite a bit of time if I knew at the start that the results were garbage.

P.S.
I can't replicate the results consistently. I went through the process twice and it failed twice (and flashed up a message about fixing errors). The script was using one of the extensions from 'My Extensions'; when I tried removing that dependency it produced compilable results, and when I added it back in (effectively restoring the original script) it now compiles!

Answers

  • JoeAlbahari
    edited April 24

    It already works like this: the script automatically compiles after the model makes changes, and the model is given a chance to fix errors. If you still see errors, it's because the model has emitted more bad code, or an invalid diff that it has been unable to correct. At some point I'll be strengthening Copilot's resolve; for now, there are a couple of options:

    • Use Sonnet instead of Haiku; it's much less likely to get it wrong twice in a row
    • (in 9.8.6+) Use the AI Chat window - it starts an agentic loop and keeps going for as long as necessary.

    The AI Chat window can also now run the script (with your permission), so the model can actually check the data as well as just looking at compilation errors. Everything is tools-based in the chat window when you use Copilot, so you pay for just one interaction per user message.

  • My expectation is that the compile check should be performed by LINQPad after the AI has returned the results (and hence would not depend on the model), and that this check would be performed before the 'Accept All' button is displayed. If the script did not compile, then either immediately go back to the AI or, if the AI is unable to fix the script, display the errors.

    I have been playing around with various models and got another unflagged compilation error, this time from Opus 4.7, which generated two helper functions like

    static double?  Round2(double? v)  => v.HasValue  ? Math.Round(v.Value, 2)                        : (double?)null;
    static decimal? Round2(decimal? v) => v.HasValue  ? Math.Round(v.Value, 2, MidpointRounding.AwayFromZero) : (decimal?)null;
    

    which doesn't compile.

    It was only one error this time, and 'Fix this error with AI' did manage to fix it, but this is the type of extra step that I thought should be unnecessary.
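    For what it's worth, here is a guess (mine — nothing in this thread confirms it) at why those helpers failed: C# happily overloads class methods, but if the script compiles in a statements-style context the declarations become local functions, and C# forbids two local functions with the same name in one scope, even when the parameter types differ. Giving each helper a distinct name (the names below are my own) sidesteps that:

```csharp
using System;

// Assumption: in a statements-style query, these become local functions,
// and two local functions named Round2 in one scope won't compile.
// Distinct names avoid the clash:
static double? RoundDouble2(double? v) =>
    v.HasValue ? Math.Round(v.Value, 2) : (double?)null;
static decimal? RoundDecimal2(decimal? v) =>
    v.HasValue ? Math.Round(v.Value, 2, MidpointRounding.AwayFromZero) : (decimal?)null;

Console.WriteLine(RoundDecimal2(1.005m));     // 1.01 (decimal is exact, AwayFromZero)
Console.WriteLine(RoundDouble2(null) is null); // True (null passes straight through)
```

    In a 'C# Program' query these would be ordinary class methods, where overloading is legal — which might also explain why the same script compiles in one mode and not another.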

  • This is in fact how it works already with the inline agent (and also with Chat, when Copilot is in use).

    With the inline agent, though, there's a limit on how many attempts it gets to fix errors, to avoid going in circles and wasting tokens. It's likely that you ran into the limit. LINQPad's Chat doesn't currently have a limit with Copilot.

    I'm currently planning to increase the limit by one or two iterations for the inline agent, and to apply that same limit to Chat. This should hopefully prevent these kinds of errors without the loop running away. This will become important as of June 1, when GitHub Copilot switches to token-based billing.
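    The bounded compile-and-fix loop described above can be sketched roughly as follows; CompileScript and AskModelToFix are stand-ins I invented for illustration, not real LINQPad or Copilot APIs:

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for the compiler: reports an error while "BUG" remains.
List<string> CompileScript(string code) =>
    code.Contains("BUG")
        ? new List<string> { "CS0103: The name 'BUG' does not exist" }
        : new List<string>();

// Toy stand-in for one model round-trip that repairs the error.
string AskModelToFix(string code, List<string> errors) =>
    code.Replace("BUG", "0");

const int MaxFixAttempts = 3; // capped so a confused model can't loop forever
string script = "int x = BUG;";

for (int attempt = 0; attempt < MaxFixAttempts; attempt++)
{
    var errors = CompileScript(script);
    if (errors.Count == 0) break;            // clean build: safe to offer 'Accept All'
    script = AskModelToFix(script, errors);  // each retry costs one more model call
}

Console.WriteLine(script); // int x = 0;
```

    The cap is the trade-off being discussed: too low and valid fixes get cut short; too high and a model that keeps emitting bad code burns tokens.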