Currently Reading: Data Abstractions and Problem Solving with C++: Walls And Mirrors (fourth edition)
I just finished reading the first chapter of this book and did the first programming exercise it wanted me to do (the first of two or three I will likely do), even though I had a basic idea of how to solve it, because I've actually written this before.
The problem is to write a function that computes the change a cashier needs to give a customer. While this is something given to just about any first-year CS student, because it is an awesome way to test a student's division and critical-thinking skills, as well as his ability to write legible code, it is also a real-world problem: depending on where a cashier works, it might be the till itself that hands out the change. Some stores have self-checkout, and since cash is still popular, a computer register must figure out not only which denominations of coins to give change in, but also which denominations of bills.
I've solved this problem three times. The first time it was an illegible piece of shite that, while I understood it as I was writing it, I honestly couldn't understand today even if I tried (and I have). Granted, I was a first-year CS student at the time, writing spaghetti in Java and calling my shit gold because it worked. I probably hadn't been coding for more than three months. The concept of meaningful variable names or well-thought-out code was unknown to me at the time.
The second time I solved it was when reading this book the first time (I had gotten to the end of chapter 2 before school got out). I solved it for coins, but not bills, and used a different function for each computation when one function would have done just as well. It worked, except that change for anything other than coins was given in a lot of coins. What I mean is that the second version would give 70 dollars' worth of quarters when the person should get back a 50 and a 20.
The third time I solved it right, I think, because it handles both bills and coins. I know the first one did this too, but it wasn't very well written. Actually, solving it is a simple task, requiring only a few steps:
1) Take the amount owed and divide it by the value of the largest denomination (e.g. $75 / $20 = 3).
2) Subtract the denomination times the quotient from the amount owed. This becomes the new amount (e.g. new amount = $75 - ($20 * 3)).
3) Repeat step 1 with the new amount and the next lower denomination (e.g. $15 / $10 = 1) until you reach the lowest denomination (which, for Americans, is pennies. Unless we do away with them, and then we've basically fucked ourselves).
Design questions, of course, include how to handle the data coming in. Since money consists of dollars and cents, it is common to think of handling money as a float (a 32-bit number that can hold a decimal value, like 2.75, or 1.69, or 13.37, as opposed to integers, which are whole numbers), but floats are prone to roundoff errors and loss of precision. Another way to think of money is in terms of whole numbers, where a penny is 1 and a hundred-dollar bill is 10,000. I suspect this is how modern systems that handle cash actually deal with money, and while slightly more complex to work with, it is less prone to errors (I remember that the first time I solved this problem, I had to add 0.003 to the number just to make sure the correct amount of change was given every time). Because both dollars and cents are treated as the same data type (in this case, ints, or whole numbers), the possibility of truncating a float's decimal value when converting to an integer goes away, and the problem is reduced to one of simple place value.