Using AI as a reviewer to improve my writing

Caught by my own game?

That’s all you and your perceptions. I’m sharing the AI prompts and responses as they relate to the subject of this thread - demonstrating what it can and cannot do for the sake of the conversation.

I don’t really have a target audience here except that I am arguing against underestimating AI’s capabilities.

For me, it can be far more helpful for projects than the hardcover encyclopedias I had for reference as a kid. As a musician and an artist of other sorts, I understand how disruptive it is. Do you ever think about what recording and radio did to the market for live musicians?

Do you have a suggestion on how to counter progress? Something besides your recommendations for personally rejecting it?


An analogy: I have a few old guitar tube amplifiers. They are said to have the best tone and to provide a feeling of pushback as you play, something digital emulation could never copy. That changed a few years ago when digital technology caught up. Yes, it’s still digital. No, it doesn’t have real warmth. Can you tell the difference by ear? Possibly live, but not in a recording… 🤔


My opinion on AI doesn’t matter. It’s a fact that the quality of its responses is improving from version to version, and it is already good enough to save businesses money, to fool people, and to provide satisfactory results for many purposes.

I think our world is coming to a crisis point with AI and robotics. Quantum computing is on the near horizon and will accelerate processing power and development in ways yet to be determined. It’s fascinating to me and it could easily become terrifying.


So, what “credibility” have I spent by sharing aspects of something that will continue to drastically affect many parts of our world?
I can't speak for anyone else, but I have started visually skipping past your posts, Alex. Scrolling past because I'm not interested in what your prompts generated. There's a non-zero chance I scrolled past some of your actual opinions, which is the risk.

I would be grateful if you stopped posting those, but who am I to tell anyone else how to conduct themselves?

Thanks for saying your piece without being rude about it. 👍

From here out if I include an AI response I will put it in a quote window.
+1. I've enjoyed conversations with Alex in the past and hope to do so in the future, but I have zero interest in the "I asked AI to respond to your post and here's what it wrote" genre of reply. I come here for human conversation; if I want to hear from GPT, I know where to find it.

This next bit isn't directed at Alex, just an observation of something I'm seeing elsewhere: some folk seem to be using "here's what GPT said about XYZ" as a high-tech version of the old "some people might say..." gambit. That is, they want to be able to present an opinion without being accountable for it. At my work, if somebody wants to use GPT to write a report or whatever, I expect them to take exactly the same responsibility for the content that they would if they had written it themselves; if it's inaccurate or misleading or offensive, "that was the AI not me" is no excuse. I apply that standard here too: own it, or don't post it.

(None of this applies to the situation where people are posting an AI-written excerpt in order to discuss what it tells us about AI writing etc. etc. - I'm just talking about the "here is an AI-written opinion which I may or may not agree with" style of post.)


This thread is specifically about AI responses and any value their feedback may have for an author. Pontificating only goes so far, and while I understand and agree that AI responses aren't normally welcome in most discussions, in this case the AI response is the subject in question.

Is there a better way to find out how AI may respond than seeing how it actually reacts to different prompts?

Note the "to your post" part of that sentence. I have no issue with "I asked GPT to critique my story, here's what it wrote, let's discuss whether it's useful feedback".

My issue is more with something like this post, where a human puts in the time and effort to give their two cents' worth on a topic, and somebody replies by asking an AI to respond to it and then pasting the response.
 