• Re: ChatGPT Writing

    From Mortar@VERT/EOTLBBS to Nightfox on Tue Dec 30 00:33:14 2025
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Fri Dec 26 2025 17:40:25

    ...it won't always give the same output even with the same question asked multiple times.

    If it was truly AI, it would've said, "You've asked that three times. LEARN TO READ!"

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Rob Mccart@VERT/CAPCITY2 to MORTAR on Wed Dec 31 08:20:30 2025
    Reminds me of the old computer adage, "garbage in, garbage out".

    I think that saying works with people's brains as well... B)

    ---
    þ SLMR Rob þ Another example of random unexplained synaptic firings
    þ Synchronet þ CAPCITY2 * Capitol City Online
  • From jimmylogan@VERT/DIGDIST to Nightfox on Sat Jan 24 13:14:03 2026
    Nightfox wrote to jimmylogan <=-

    Sorry - didn't mean to demand anything. I just meant the fact that someone says it gave false info doesn't mean it will ALWAYS give false info. The burden is still on the user to verify output.

    Yeah, that's definitely the case. And that's true about it not always giving false info. From what I understand, AI tends to be non-deterministic in that it won't always give the same output even
    with the same question asked multiple times.

    Yep! I think that's part of the fuzzy logic. I've found that if I give it
    MORE context I get more specific answers to my current issue/condition.
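    The non-determinism mentioned above usually comes from temperature sampling: the model picks the next token from a probability distribution rather than always taking the top choice. A toy sketch (not any real model's API, just the sampling math):

    ```python
    import math
    import random

    def sample_token(logits, temperature=1.0):
        """Sample one token index from logits using temperature scaling.

        Higher temperature flattens the distribution (more varied picks);
        temperature near 0 approaches a greedy, deterministic choice.
        """
        scaled = [l / temperature for l in logits]
        m = max(scaled)                      # subtract max for numeric stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        r = random.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    # The same "question" (logits) yields different "answers" across runs:
    logits = [2.0, 1.5, 0.3]
    varied = {sample_token(logits, temperature=1.0) for _ in range(200)}
    # Near-zero temperature collapses to the single most likely token:
    greedy = {sample_token(logits, temperature=0.01) for _ in range(50)}
    ```

    So asking the same thing twice really can give two different answers, by design.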


    ... I put a dollar in one of those change machines...nothing changed.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Mortar on Sat Jan 24 13:14:03 2026
    Mortar wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Fri Dec 26 2025 17:08:43

    I've learned that part of getting the right info is to ask the right question, or ask it in the right way. :-)

    Reminds me of the old computer adage, "garbage in, garbage out". Or
    you could take the Steve Jobs approach: You're asking it wrong.

    Exactly!!!



    ... As I said before, I never repeat myself
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Mortar on Sat Jan 24 13:14:03 2026
    Mortar wrote to Nightfox <=-

    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Fri Dec 26 2025 17:40:25

    ...it won't always give the same output even with the same question asked multiple times.

    If it was truly AI, it would've said, "You've asked that three times. LEARN TO READ!"

    LOL - good point!!!



    ... We all live in a yellow subroutine...
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Sat Jan 24 16:39:52 2026
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Sat Jan 24 2026 01:14 pm

    Yep! I think that's part of the fuzzy logic. I've found that if I give it MORE context I get more specific answers to my current issue/condition.

    Yep. I've heard of "prompt engineering" referring to crafting a good prompt for AI to get what you need. I've also heard you can go a step further and give the AI some criteria and have it iterate over its answers and check itself against your criteria. It may take a little longer, but that way you're basically making it do more of the work to get an answer that's helpful to you.
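    The self-checking loop described here could be sketched like this. Everything below is hypothetical: `ask_model` stands in for whatever chat client you use, and `stub_model` is a fake just to show the flow.

    ```python
    def refine_with_criteria(ask_model, prompt, criteria, max_rounds=3):
        """Iteratively ask a model to check its own answer against criteria.

        ask_model(prompt) -> str is any callable (a real API client or a stub).
        criteria is a list of (description, check) pairs, where check(answer)
        returns True when the answer satisfies that criterion.
        """
        answer = ask_model(prompt)
        for _ in range(max_rounds):
            failed = [desc for desc, check in criteria if not check(answer)]
            if not failed:
                break
            # Feed the unmet criteria back so the model can revise its answer.
            answer = ask_model(
                f"{prompt}\n\nRevise your previous answer:\n{answer}\n"
                f"It must also satisfy: {'; '.join(failed)}"
            )
        return answer

    # Fake "model" for illustration: adds a citation only when asked to revise.
    def stub_model(prompt):
        if "Revise" in prompt:
            return "Paris is the capital. [source]"
        return "Paris is the capital."

    result = refine_with_criteria(
        stub_model,
        "What is the capital of France?",
        [("cites a source", lambda a: "[source]" in a)],
    )
    ```

    The extra round trips are the "takes a little longer" part: each unmet criterion costs another model call.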

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Mortar@VERT/EOTLBBS to . on Sat Jan 24 20:20:20 2026
    Re: Re: ChatGPT Writing
    By: jimmylogan to Mortar on Sat Jan 24 2026 13:14:03

    ... We all live in a yellow subroutine...

    Next time, install a bathroom.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Sun Mar 8 19:18:28 2026
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Sat Jan 24 2026 01:14 pm

    Yep! I think that's part of the fuzzy logic. I've found that if I give it MORE context I get more specific answers to my current issue/condition.

    Yep. I've heard of "prompt engineering" referring to crafting a good
    prompt for AI to get what you need. I've also heard you can go a step further and give the AI some criteria and have it iterate over its
    answers and check itself against your criteria. It may take a little longer, but that way you're basically making it do more of the work
    to get an answer that's helpful to you.

    Yes, and I've also heard (haven't tried it) from a friend that there's a particular
    one now where you give it the documents/sources you choose and it ONLY works from
    those...
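    That document-grounded style of tool typically retrieves from only the sources you supply and refuses when nothing relevant is found. A toy sketch of the idea, using plain word overlap instead of a real retriever (all names here are made up for illustration):

    ```python
    def grounded_answer(question, documents):
        """Answer only from the supplied documents (toy keyword retrieval).

        Real document-grounded tools retrieve relevant passages and instruct
        the model to refuse when nothing matches; this sketch mimics that
        refusal behavior with simple word overlap.
        """
        q_words = set(question.lower().split())
        best, best_overlap = None, 0
        for doc in documents:
            overlap = len(q_words & set(doc.lower().split()))
            if overlap > best_overlap:
                best, best_overlap = doc, overlap
        if best is None:
            return "I can't answer that from the sources you gave me."
        return f"Based on your sources: {best}"

    docs = ["Synchronet is BBS software for DOS, OS/2 and Windows."]
    hit = grounded_answer("What is Synchronet?", docs)
    miss = grounded_answer("Who won the 1998 World Cup?", docs)
    ```

    The key behavior is the refusal branch: off-topic questions get "I can't answer that" instead of a made-up answer.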



    ... I broke a mirror & got 7 years bad luck. My lawyer says he can get me 5.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com