They are useful tools. I use Copilot quite often in my work routine, mostly to generate boilerplate code for me, add explanatory comments, review code for syntax and logic mistakes, etc. They can handle analysis and debugging quite well. They can usually write code from plain-language input if you can describe specifically what you need. And they can write documentation fairly well based on their own analysis of the code (though sometimes it's missing context).
They're still not a silver bullet by any means. If their training on a particular language is limited and/or documentation isn't accessible, they often make up stuff out of whole cloth that looks like it might work but isn't correct syntax (Copilot was basically useless with Dynatrace Query Language when I was learning the syntax last year). Sometimes it doesn't follow instructions exactly. Sometimes, even when just refactoring code to reduce complexity, it ends up making unintended changes to the logic. Sometimes I end up spending as much time or more debugging AI-generated code as it would have taken to write it correctly the first time.
It’s handy, but it’s no silver bullet. The fact that these guys got something so novel and complicated out of it is quite impressive and probably required a lot of data input, precise mathematical instructions and, frankly, luck and a lot of iterations.
Yeah, to be fair, I've had them do some pretty incredible stuff. I often need to spend some time finding its mistakes, making it fix them, refining my own verbiage, and coaching it on how it should respond (so it doesn't overwhelm itself). But it's definitely helped me finish a month of work in a week.
Ok I for one was not expecting anything useful to come out of these tools