This topic was automatically generated from Slack. You can find the original thread here.
I'm using the built-in Google PaLM step and seeing different results when I compare to MakerSuite. Is there a way to see what model is being used? I don't seem to see it.
I'm fixing to test with custom Node.js code to see if it reproduces the issue I'm seeing. Basically, PD's step is returning A for a prompt, while MakerSuite and local Node code both return B for the same prompt.
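For reference, here's a minimal sketch of the kind of local test you're describing. It lists the models the key can access (which answers the "what model is being used" question from the API side) and then calls generateText against an explicit model. This assumes the legacy PaLM v1beta2 REST endpoint, Node 18+ for built-in fetch, and an API key in a PALM_API_KEY env var; none of those details come from the thread, so adjust to your setup.

```javascript
// Minimal sketch, assuming the legacy PaLM v1beta2 REST API and Node 18+.
// Run as an ES module (e.g. node --input-type=module or a .mjs file).
const API_KEY = process.env.PALM_API_KEY; // hypothetical env var name
const BASE = "https://generativelanguage.googleapis.com/v1beta2";

// List the models this key can see, to confirm which one is actually available.
const models = await fetch(`${BASE}/models?key=${API_KEY}`).then((r) => r.json());
console.log(models.models?.map((m) => m.name));

// Call generateText against an explicit model so there's no ambiguity about
// which model produced the answer.
const prompt =
  "I've got a QUEEN of SPADES and an 8 of DIAMONDS. The dealer shows a 7 of HEARTS. Should I hit or stay?";
const res = await fetch(
  `${BASE}/models/text-bison-001:generateText?key=${API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // temperature: 0 reduces run-to-run sampling variance, so differences
    // between PD, MakerSuite, and this script are more likely to be real.
    body: JSON.stringify({ prompt: { text: prompt }, temperature: 0 }),
  }
);
const data = await res.json();
console.log(data.candidates?.[0]?.output);
```

One thing worth noting: if the PD step and MakerSuite use different default temperatures or sampling settings, you'd see different answers even with the same model, so pinning temperature is a useful first step before concluding the models differ.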
Context: I'm asking it for blackjack advice and telling it my hand. For example, "I've got a QUEEN of SPADES and an 8 of DIAMONDS." - and in the response, it says my hand totals 19, not 18 (a Queen counts as 10, so 10 + 8 = 18).
Ah yes, gen AI and math aren't always so strong. From what I remember, it wasn't until GPT-3.5 that it could count effectively. We instructed Pi to ignore all credit-calculation questions because it was just terrible at math.
Go home Google, you're drunk: "I would recommend hitting. With a total of 18, you are only one point away from 21, which is the winning hand in blackjack. The dealer's seven of hearts gives them a total of 17, which is below the 21 threshold. Therefore, there is a good chance that the dealer will bust if they hit, which would give you the win. However, if you stay, you will not be able to improve your hand and will have to hope that the dealer busts."