I’m really not that interested in talking about this again, but I read a passage (about an entirely different topic) that made me think about this. Here’s the passage:

You can flip a coin 100 times and have it come up tails 60 times. It’s an unlikely result, but it happens. About 3 percent of the time, you’ll get 60 or more tails.

You can flip it again the next day, and it can come up tails 60 times again. This is possible, and no doubt has been done before. (Again. Don’t do this. It is so boring! Read more football instead. That’s a good use of your limited time on our doomed planet.)

If you flip the coin 100 times for 100 days, though, you’re not going to keep getting tails 60 percent of the time. Regression, the most powerful arm of probability, will tug the percentage back toward 50 percent. It just will. It doesn’t have a choice, and neither does the coin.

I’ll go over what stands out in the comment below.

3%? That seems really low for 60 of one side. I would think the chances of that happening would be a lot higher. What about 59 of one side? 58? 57? I would think there must be some big jump in percentages at some point. The percentage of seeing 51/49 must be pretty high. Same with 52. What does the drop in percentages look like, getting to 60/40?

Let’s say the drop occurs between 54/46 and 55/45, going from 40% to 10%. If this were true, suppose you flipped the coin 95 times and you already had 54 of one side. Wouldn’t your chances of getting that side in the next 5 flips be a lot lower?

The other thing that stood out: the concept of regression to the mean seems like the one I’m using to argue my position.

I’ll address some of your other points later, but it’s important for you to understand that the limit on this condition makes 3% far more reasonable than some number close to 50%.

You’re specifying HEADS: 60 heads, not 60 heads or 60 tails. You can see how this makes a difference just with flipping two coins. If you flip 2 coins, the odds of them being the same are 50% and the odds of them being different are 50%. But the odds of them being BOTH HEADS are 1/4, or 25%, and the odds of them being BOTH TAILS are 1/4, or 25%. Just with two coins, you’ve cut the likelihood of getting your desired result in half, from 50% to 25%.

The math isn’t exactly parallel, because in the scenario you describe, each flip is independent of the other 99, and the 60+ heads can be in any order (they don’t have to be on specific flips), but you’re still talking about a LOT of flips (100) where you’re looking for a range of specific numbers (60 to 100) of specific outcomes (heads).

Rather than explain the math, I thought I’d just show you a demo. Here is a spreadsheet I made simulating 100 coin flips, 220 different times. I’m calling each set of 100 flips a “trial” for clarity’s sake. Each trial is numbered at the top, mostly for my own clarity. The smaller number beneath each trial number is the number of HEADS flipped in the trial below.

The red number in the upper left is the number of trials where 60 or more HEADS were flipped out of 100 tosses. As I’m looking right now, it says 7. The sheet is set to auto-recalculate, so if you refresh the page, when it loads, the flips will randomly re-simulate, and you’ll see different results (give it 10 seconds or so after the refresh; it takes a moment). I refreshed five times and the red number moved between 5 and 8. 220 is a very small sample size, so that kind of fluctuation is to be expected, but you’ll probably only see numbers pretty close to this range.
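If you can’t open the sheet, the same demo can be sketched in a few lines of Python. This isn’t the spreadsheet’s actual formulas, just the same idea: 220 trials of 100 fair flips, counting how many trials produce 60 or more heads.

```python
import random

random.seed(1)  # fixed seed so the run repeats; remove it to re-simulate

TRIALS = 220
heads_counts = [sum(random.randint(0, 1) for _ in range(100))
                for _ in range(TRIALS)]

# The "red number": trials with 60 or more heads out of 100 tosses
red_number = sum(1 for h in heads_counts if h >= 60)
print(red_number)
```

Rerunning without the seed should bounce the count around the same small neighborhood the sheet shows.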

The blue number at the very top is the percentage (out of 220 trials) where 100 flips resulted in 60 or more heads.
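And as a check on the sheet, the exact probability can be computed directly: count the flip sequences with 60 to 100 heads and divide by the 2^100 equally likely sequences. A quick sketch in Python:

```python
from math import comb

# Exact P(60 or more heads in 100 fair flips):
# arrangements with k heads, summed for k = 60..100,
# out of 2**100 equally likely flip sequences.
p_60_plus = sum(comb(100, k) for k in range(60, 101)) / 2**100
print(f"{p_60_plus:.4f}")  # about 0.0284, i.e. roughly 3%
```

That 2.84% is presumably where the passage’s “about 3 percent” comes from, and it matches the blue number hovering around 3%.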

Right now I have it set to let you look only. But if you want to play around with it (make it 61 or more heads, or 57 or more heads), I can give you editing privileges. You’d only have to change two cells to do it.

I’m not finished thinking about this, but something occurred to me, after thinking about what you said below:

I think this is helpful. Perhaps a better way to think about this is 60 heads or 60 tails, versus only one of these outcomes. This might not be clear, but let me explain how I arrived at it. While looking at your spreadsheet, I noticed that 38 or 39 heads seemed rare as well. 39 heads basically means 61 tails. I guess this isn’t a revelation so much as something that made your example a lot clearer and more palpable. What would be the odds of getting either 60 heads or 60 tails? If this number is closer to 3% than to 50%, that would be a little weird, too, right?

I’m just guessing, but I would expect the odds of getting 60+ heads OR 60+ tails to be around 6%. Going to adjust the sheet and test it now.

Okay the sheet now shows the number of tails vs. the number of heads, and a total number of trials with either 60+ heads or 60+ tails. Looks like about 6% to me.
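The exact math agrees with that eyeball estimate. 60+ heads and 60+ tails can’t both happen in the same 100 flips, so the two probabilities simply add, and by symmetry that’s double the one-sided number. A sketch:

```python
from math import comb

p_60_heads = sum(comb(100, k) for k in range(60, 101)) / 2**100

# Mutually exclusive outcomes, equal by symmetry, so just double it
p_either_side = 2 * p_60_heads
print(f"{p_either_side:.4f}")  # about 0.0569, close to the observed ~6%
```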

I admit if I were to just estimate what the probability would be, there’s no way I would have come up with anything near 3%. I might have guessed 25% or so, but when I thought about it, 3% sounded much more likely, though it still seems low to me too.

That makes sense. 6% still seems surprisingly low, right? Or is it just me?

I’m mildly interested in doing this, as long as it’s not too complicated. I’m interested in seeing if a big jump occurs or not between 50 and 60. If you don’t mind, check how many times 55 and 45 show up.

I have it displaying the number of trials where there are 59+ heads or tails right now. It’s easy to adjust. Click the cell where the big red number is, change where it says “59” to whatever number you want, and hit enter. Then do the same thing for the cell where the big green number is. You don’t have to change the cells in the top row where they say 60; just keep in mind that they will display results for whatever number you’re changing to, not 60 anymore.

It didn’t allow me to do this.

By the way, any predictions about the way the percentages will change, as they get closer to 50 flips of heads or tails? What would you predict as the flips go farther from 60? For example, would 70 flips be closer to 0% or 3%?

Try it now. And yes, if the relationship isn’t exactly exponential, I suspect it’s at least roughly exponential (I’m saying this because I haven’t worked out the exact math yet), the way the two-heads-with-two-coins thing is exponential, which would mean a steeper change with every number beyond 50. 2 squared is 4, 3 squared is 9, 4 squared is 16, etc. But I don’t think this condition is that steep.
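Rather than guessing at the shape, the tail probabilities can be computed exactly for a few thresholds. They fall off much faster than a simple square: roughly 54% for 50+, 18% for 55+, 2.8% for 60+, about 0.18% for 65+, and about 0.004% for 70+, which also answers the earlier question about 70 (far closer to 0% than to 3%). A sketch, with a small helper (`p_at_least`) introduced for illustration:

```python
from math import comb

def p_at_least(k, n=100):
    """Exact probability of k or more heads in n fair flips."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

for k in (50, 55, 60, 65, 70):
    print(k, f"{p_at_least(k) * 100:.4f}%")
```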

(OK, the problem was that I didn’t log in, I think. I will try to do this later.)

No; the problem was that I had editing turned off except for me.

Wait, I just realized that 59+ means “>=59.” Shouldn’t we just look at 59, and not all the numbers that exceed it?

I changed the number to “>=55” and, using both heads and tails, it’s about 30%, which seems like a huge jump.

I now adjusted it to “=55.” The numbers are lower (4.55% each, or 9.1% combined), which doesn’t seem like that big of a jump.

At 51, the percentages are 4.09 and 4.55, which comes out to 8.6%, close enough. That seems low. What could be going on? My thought is that basically, from about 45 to 55 (heads or tails), the percentages are the same. My error is thinking that getting 50 heads or tails is significantly more likely than 55, 48, or any number between 45 and 55. (I didn’t really plug in 45, but I’m just assuming.)
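For what it’s worth, the exact single-value probabilities do form a plateau around the middle, though 220 trials is small enough that the simulated percentages can wander a fair way from them. Exactly 50 heads is about 7.96%, 51 is about 7.80%, and 55 and 45 are each about 4.85% (equal by symmetry). A sketch (`p_exactly` is just an illustrative helper):

```python
from math import comb

def p_exactly(k, n=100):
    """Exact probability of exactly k heads in n fair flips."""
    return comb(n, k) / 2**n

# Flat-ish near the middle, and symmetric around 50
for k in (45, 50, 51, 55):
    print(k, f"{p_exactly(k) * 100:.2f}%")
```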

It should be noted that 220 trials isn’t all that many, but I wouldn’t expect the percentages to be dramatically different. Or is this wrong? If we did a million trials, what are the chances that the percentages would be dramatically different?

Still, should we think that a jump from 9% to 3% is not significant? 9% versus something closer to 0% does seem significant. If, on the first round of 100 flips, you had 60 of either heads or tails, the odds of getting 65 (let’s say) would be closer to zero. So if you had 64 of either heads or tails and had a few more flips to go, wouldn’t we expect the other side of the coin? That is, those last flips wouldn’t be 50/50 probability.

I’m sorry to resurrect a dead horse, but a conversation we had several years ago about iTunes in shuffle mode keeps jumping into my head from time to time, and I think it reveals a little of our problem here.

Part 1: Randomness

We weren’t talking about randomness; we were talking about iTunes. But I mentioned how a lot of people have complained about how shuffle mode often brings up the same artist twice in a row, or three times over nine songs. Apple has responded by making these kinds of repeats less likely.

I said, “What’s interesting is that people say they want randomness, but what they’re actually asking for is less randomness.”

You said, “You mean MORE randomness. People want more randomness in shuffle mode.”

I didn’t say anything then because I didn’t want to get into a math conversation in the middle of a music or tech conversation, but if you actually did feel this way, it reveals you don’t understand randomness by definition.

There’s a difference between “mixed” and “random,” or even between “mixed” and “shuffled.”

Randomness dictates that if you listen long enough, you’ll eventually hear the same artist played ten times in a row, or your entire library played exactly in alphabetical order, or even the same song played a hundred times in a row. This is because randomness says each selection is influenced by no consideration other than chance.

When we ask iTunes to give us a good mix without repeating artists or songs too near each other, we’re intervening in randomness, or making it not truly random. By definition, randomness means not influenced by something coming before or after.

Part 2: Is a coin toss random?

It’s entirely conceivable that a coin toss is not random, that preceding results could give a hint to forthcoming results. In some ways, this is how blackjack (the card game) works. It’s largely determined by chance, but since some cards are turned up as the game is played, what happens first (the dealer’s upturned card is an ace, for example) influences your prediction of what comes next (your chances of getting an ace are lowered).

If a coin toss is not truly (or at least not practically) random, it ought to imply other things we think of as random are not actually random. I can live with this. Science wants us to believe, for example, that the spontaneous generation of life was random millions of years ago. I don’t subscribe to this idea, but I also acknowledge that my interpretation of how life came to be has no basis in science. And (I think most importantly) I’m open to the possibility that life did come into existence by chance. It doesn’t change what I believe about creation.

Part 3: The math of probability is a DESCRIPTOR, not a DEFINER

When we use math to predict the likelihood of flipping ten tails in a row, it’s very important to understand something I never thought to explain, to my students or to certain friends.

Math doesn’t determine what happens. Math helps us describe what happens, and predict what will happen. Example: if I’m moving at a rate of sixty miles per hour, we use math to determine that in three hours, I’ll have moved 180 miles. The math didn’t make it happen; the math described what happened, and it can reliably predict that if I go another hour at the same rate, I’ll have gone 240 miles.

The math of probability doesn’t tell us that each coin flip is independent of the coin flip right before it, or the ten coin flips right before it. We have seen by observation that it is, and this makes the math somewhat reliable about what’s likely to happen next, over time.

Don’t just nod and agree. Really absorb this if you haven’t before. When scientists shoot a rover at Mars, the math they’re using describes what will happen; it doesn’t make it happen.

So if a person disagrees with the math of probability, the person is either disagreeing that the specific math describes the situation (which most of us aren’t really equipped to do if we haven’t deeply studied math), which is to say there is a flaw in the math, or the wrong math is being used, OR the person is disagreeing that math can be applied in this situation.

TANGENT: I saw a story on ABC News last night where a reporter interviewed some healthcare workers who haven’t taken the vaccine yet. One of them, a nurse, said the vaccine was rolled out too quickly, and people can SAY it’s safe and effective, but she needs to “do the research herself” first. This is a gross misunderstanding of the word “research” as it applies to this kind of science, because unless she has a lab and an understanding of immunology and virology, she can never do the research and therefore will never get a vaccine. This is relevant: most of us can question the math, but that’s as far as we can go, because we’re unwilling or unable to actually study it in order to refute it. It can be a tricky way to live, because it implies we can never trust knowledge unless we ourselves understand it: the earth is flat, six million Jews didn’t die, and the election was stolen.

I break it down this way because I believe we can better address the problem if we understand where you and I aren’t agreeing. Do you think randomness doesn’t truly exist? Do you think it exists but coin tosses aren’t random? Or do you think the math of randomness is flawed in how it describes coin tosses?

To answer your point in your original post about regression to the mean: regression to the mean isn’t a concept dictating what WILL happen, but a description of what happens over time. If you believe 60 out of 100 heads is simply chance, you can’t believe a regression to the mean is NOT chance. If you believe regression to the mean is a kind of regulating factor, you have to believe the original anomaly was influenced by something other than chance.

That may be confusing, but remember: we’re talking about the NEXT coin flip. This was our original disagreement. If you believe the NEXT coin toss is SLIGHTLY more likely to be tails, you also believe SOMETHING other than chance influenced the coin tosses before. In this case, the discussion is simply an impasse.

I think I have a better understanding of your thinking on a specific point you made above, and I want to check to see if you agree:

Here’s what I think you mean: a truly random process would sometimes lead to non-mixed results (e.g., the same artist twice in a row). Therefore, when someone complains about these non-mixed results from iTunes shuffle and claims to want the list to be more random, you’re saying they actually want less randomness, because non-mixed moments also naturally result from a random process. Ergo, trying to eliminate non-mixed qualities would actually make a list less random.

Am I understanding you properly?

Yes. I was thinking of this when I wrote my most recent response but I didn’t want to repeat myself. Thanks for remembering it.

I didn’t mention this part of the anecdote because I was certain it wouldn’t help. But the person I most recently (at the time) had the conversation with was a math teacher where I taught. A really good one. He’s also a huge fan of the same music I listened to in the 80s (he’s a year older than us), so we talk about music a lot.

He complained about iTunes in shuffle mode and I said the thing I said, about him wanting it to be less random. He paused just a second with a faraway look in his eyes and then smiled. “I guess so!” he said. Another math teacher in the room (his wife) smiled too. It felt good to make the point with them. 🙂

OK, thanks for the response.

I don’t think he’s wrong–nor do I think you’re necessarily wrong. It just depends on one’s perspective. The process can be random in the way you describe, but the actual results may not be random (based on the dictionary definition). I would guess most people are using the dictionary definition. What I don’t get is why you think they’re wrong for doing this.

I also want to ask another question: while a random process can lead to non-mixed results, would you say that the degree and frequency to which the results are non-mixed has to be somewhat limited? An occasional short sequence that is non-mixed in character is totally acceptable and ordinary; it would be consonant with a random process. But wouldn’t there be a point that would create dissonance, so to speak, i.e., a point where we’d have legitimate reasons to question the randomness of the process? For example, if I listened to 100 songs a day via shuffle, I wouldn’t see a non-mixed sequence of 30 or more songs per day (e.g., 30 songs in alphabetical order, 30 different versions of the same song in sequence, etc.). A truly random process would not produce these results, or at least I don’t think it would.*

*One problem with this discussion is that we’re both speculating and/or speaking in theoretical terms, versus basing our opinions on empirical data, i.e., actually monitoring and recording the frequencies and degrees of non-mixed and mixed results from iTunes shuffle. It’s possible that the non-mixed results occur to a greater degree and frequency than I assume. It’s also possible that my sense is accurate or even an underestimation.

I was going to go through your post and comment or ask questions as they came up, but I think that would result in a sprawling post. I’m going to just answer the questions you ask at the end:

If I tried to answer this in the simplest, clearest way I can, I would say the coin tosses are not random–not in the way you define it. And I actually question whether such randomness exists–in the real world. (I might agree with the third option as well, and maybe that will become clear in what I will say next.)

At this point, my sense is that the main problem is the concept of infinity. I think your definition of randomness depends on an infinite range. For example, hearing all my music by album, in the proper song sequence of each album, via shuffle mode, is not remarkable if I listened to music for an infinite number of hours. Within an infinite number of hours, this could happen many times. But hearing my music this way, via shuffle, over 100,000 hours would be extraordinary, and it may not even happen once.

I’ll pause here as I think this could clear things up.

(On a side note, why did you say this: “Math doesn’t determine what happens.” Are you thinking that I’m thinking math determines or causes something to happen? I don’t think I’m thinking this way. Indeed, I’m not even sure what math causing something really means.)

Okay. This pretty much settles things, then. I don’t find fault with this belief, and I think if I had figured it out years ago, we wouldn’t still be on this. Do you believe that if randomness doesn’t truly exist, there are situations that tend toward randomness? Example: the flip of a coin may not be random, but as coin flips accumulate, there’s a tendency to randomness?

This is a valid thought. I’ll just gently remind you that ANY specific, predetermined or predefined order would be AMAZINGly remarkable in shuffle mode over 100,000 hours. Quickly: the odds of hearing these ten songs in order in shuffle mode

One by Metallica

Two Tickets to Paradise by Eddie Money

Three Little Birds by Bob Marley

4th of July, Asbury Park (Sandy) by Bruce Springsteen

Take 5 by the Dave Brubeck Quartet

6 O’Clock Bad News by Cecilio and Kapono

Seven Nation Army by the White Stripes

Eight Days a Week by the Beatles

9 to 5 by Dolly Parton

Ten Years Gone by Led Zeppelin

…are exactly the same as hearing THESE ten songs in order in shuffle mode

The Best of Times by Styx

Love to Love You Baby by Donna Summer

Linus and Lucy by the Vince Guaraldi Trio

Blind Man in the Bleachers by Loyal Garner

Everyday by Buddy Holly

Sukiyaki by Kyuu Sakamoto

Drilled to Kill by Michael Schenker Group

Eat It by Weird Al Yankovic

Out of the Blue by Debbie Gibson

Morning Like This by Sandi Patti

You would NOTICE the first set of songs because how strange! But the odds for the second list are the same, no matter how large or small your music library. Agree?
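One way to see this: for a shuffle that plays 10 songs without repeats, any one specific ordered list of 10 titles has probability 1 divided by the number of ordered ways to draw 10 songs from the library, whatever the titles happen to be. A sketch (the 2,000-song library size is a made-up number for illustration):

```python
from math import perm

LIBRARY_SIZE = 2000  # hypothetical library; substitute your own count

# Ordered ways to draw 10 distinct songs from the library
orderings = perm(LIBRARY_SIZE, 10)

# Any one specific 10-song list is a single one of those orderings,
# so both playlists above get exactly the same (tiny) probability.
p_specific_list = 1 / orderings
print(p_specific_list)
```

The number shrinks as the library grows, but it shrinks identically for the “numbered” list and the arbitrary-looking one; shuffle never reads the titles.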

A couple of reasons. First, I think you have an unhealthy distrust of math because you claim not to understand it. If we remove the math entirely as a factor in the probability, maybe we eliminate one of your issues with the problem.

I know you don’t think the math dictates the outcome. The other reason I mention this is the math is descriptive, not prescriptive. We often plug info into formulas to get answers, thinking the math generates the reality. In fact, the equation is descriptive of a relationship, and plugging the variables in is simply looking up specific values in the relationship, like looking in the R section of a dictionary for the definition of “rambunctious.” We use math to look up the relationship between X and Y. The relationship already exists; we’re just finding a specific part of the relationship in the dictionary of the universe.

So when I point to the math of probability and say “this is why ________,” I’m pointing to established, observed truth. The reason I know the 11th toss is as likely to be heads as tails after 10 heads is observed history, which indicates that the 11th toss is independent of the 10 preceding.

To be more precise, I’m not saying randomness doesn’t exist—I’m saying infinity doesn’t exist. Or to be more specific, infinity doesn’t actually apply to coin flips or to time spent listening to music. Coins can’t be flipped an infinite number of times; coins are finite. So, I’m doubting randomness that is bound by infinity. I believe randomness exists, but it’s not bound by infinity. My sense is that these two definitions are really different. And my sense is that you and others are utilizing the first definition when looking at coin flips, and I’m suggesting that is wrong—or at least it seems wrong.

If we are talking about an infinite amount of listening hours, I would agree. If we’re talking about infinity, the first sequence could occur over and over again—millions or billions of times—and it would not be odd.

But if we’re talking about 100,000 hours or a million hours, the first sequence would be extraordinary—and maybe even highly unlikely. Right? If you agree with that, do you agree that randomness based on infinity doesn’t exist in the real world—in space and time? If you disagree, I’d like to hear your explanation as to why you think it does exist in the real world.

OK, but why do you think I have an “unhealthy distrust of math” versus just not being good at it? I don’t have an affinity for math (What’s the opposite of affinity?). My intuitive sense of it is really bad. I don’t hear harmony—I’m almost tone deaf. In a lot of ways, I think I’m “math deaf,” too.

I was wondering if that was your thinking, so thanks for clarifying this.

Math dictating the outcome and math being prescriptive seems like the same thing. How is it different?

This seems wrong. If the 10 preceding tosses were all heads, wouldn’t “observed history” suggest that the next coin toss would be tails—or that tails would soon appear? Suppose I flipped a coin a billion times, and 11 coins of the same side occurred about 1,000 times. Based on this (observed history), wouldn’t we guess the next flip would be tails? Your belief that the next toss is heads is based on probability—math—and the notion that the range of flips is infinity. 11 flips of the same side is nothing within an infinite number of flips. 1 trillion flips resulting in the same side would be unremarkable as well. We don’t have to go into this again, but I’m objecting to the use of the word “observed.” I think using observation, and the history of coin tosses, would lead to the conclusions I’ve suggested, not the one that is commonly accepted. Or do you disagree with that?

Yeah, I’ve already removed the infinity consideration from this conversation. You made it clear a long time ago that you’re limiting the discussion to the real world.

Okay, in my example of the two playlists, I’m removing infinity from the conversation (though for the record my position — and the mathematical position — is the same whether we’re talking infinite or finite).

So, in the real world, it doesn’t even matter if we’re talking 100,000 hours of listening or 10,000 hours of listening. The likelihood of hearing the first sequence is exactly the same as the likelihood of hearing the second. You are correct: the first sequence would be extraordinary, and you should call all your friends and tell them if you hear these songs this way. However, it would be equally extraordinary to hear the second sequence. iTunes doesn’t ascribe any meaning to the titles of songs (at least I don’t think it does). To iTunes, each sequence is an arbitrarily determined order of songs — nothing amazing about either one. Do you agree?

I’ll agree with you that randomness in the real world can’t be proven, but I lean toward believing in it, based on what I understand about math.

Here’s a quick example. In December, you may remember that we had a convergence of planets in our sky, making what some were calling the “Christmas star.” From earth’s vantage, it looked like three planets were kind of joining to form one bright light.

Scientists knew it was happening because of math. They could accurately predict the convergence because the movement of the planets around the sun (and in relation to earth) is constant. Because of this, they can predict when the next one will be, and they can also reliably estimate when it happened last — thousands of years ago, it turns out.

Some bozos on social media challenged the thousands-of-years-ago part. They said since none of us was here, we can’t say when it happened last. The scientists explained that since the movement has proven to be constant, they could wind it back as far as they needed. The math is accurate.

The response? “Yeah, but you weren’t actually there and there’s no documentation, so you can’t prove it.”

The bozos on social media were actually right from a certain perspective. If, for example, they don’t believe the earth and the rest of the universe existed billions of years ago, it doesn’t matter what the math says: there couldn’t have been a convergence if none of this even existed.

But science doesn’t play that game. Until some kind of creation can be proven, they have to go with the most likely scenario based on what they’ve seen in the real world.

It may be impossible to prove to you randomness exists. But indications are that things TEND toward randomness, which by itself kind of indicates that randomness does exist. They even taught us to include it (in the form of entropy) in our formulas in tenth grade chemistry class.

I lean toward accepting science while acknowledging the real possibility of magic. I use “magic” in a kind of literary sense, to include fate or God or some other supernatural force. I just want you to realize that by eschewing the science, you’re ascribing some other, non-natural force to the result of a coin toss, which even my resurrection-believing faith can’t allow.

Because your response to what appears to me to be undeniable science is to go with a preconceived notion of should. You insist that the likelihood of a coin landing tails after several heads “has to” “pull toward” tails. Call it instinct, gut reaction, or belief in some other power, but it’s a distrust of math. I don’t think it has anything to do with not having an affinity for it, or not being good at it.

Man, I don’t believe planes can fly. So I’m with you. A tiny part of me freaks out all the time when I get on a plane, especially one going across the Pacific. Sometimes not so tiny. I force it down based on evidence, but it’s not easy most of the time. But my non-understanding doesn’t result in my denying that planes fly, and that people a lot smarter than me understand how and why.

Maybe I’m wrong. And maybe it doesn’t matter, because perhaps where it really comes down to mattering, you do trust the math — as when you get on a plane, drive your car over a bridge, or go up to the 20th floor of a building. I just think (and again, maybe I’m wrong) the math that lets a plane fly, a bridge hold up a bunch of vehicles, and a skyscraper sway in the wind without crashing down is the same math that describes the likelihood of a coin toss.

They are the same thing. I’m saying math doesn’t dictate the outcome and it’s not prescriptive.

No. History indicates that the likelihood of tails on the next toss is equal to the likelihood of heads. History tells us that the result of each coin toss is independent of the tosses before it. I understand that you don’t believe this.

No, because the 12th flip would be tails half the time, and the other half of the time it would result in a 12th head. Out of those billion coin flips (and why are we going to infinity now?), if 11 heads occurred 1,000 times, you can pretty much bet that 12 or more heads occurred 500 times, and 13 or more heads occurred 250 times, because half the time, the 12th toss would be heads. Again.
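This is also easy to check by brute force instead of by formula: simulate a long run of flips, find every spot where 11 (or more) heads in a row just happened, and look at the very next flip. Assuming Python’s random generator stands in for a fair coin, the next flip comes up heads about half the time, streak or no streak:

```python
import random

random.seed(0)  # fixed seed so the check repeats
flips = [random.randint(0, 1) for _ in range(2_000_000)]  # 1 = heads

# Collect the flip that immediately follows each run of 11+ heads
next_after_streak = []
run = 0
for i in range(len(flips) - 1):
    run = run + 1 if flips[i] == 1 else 0
    if run >= 11:
        next_after_streak.append(flips[i + 1])

frac_heads = sum(next_after_streak) / len(next_after_streak)
print(len(next_after_streak), round(frac_heads, 3))
```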

Again, while I’m acknowledging that infinity doesn’t exist in our real world, we can reliably make predictions based on trends toward some undetermined future.

But even not doing that, we can observe that each coin flip is independent of the coin flips before it, whether the preceding coin flips were five consecutive heads, five consecutive tails, or a mix.

No, I don’t. We agree the first list would be amazing, precisely because the order is not arbitrary. iTunes shuffle may be choosing it in an arbitrary way, but if that’s the case, we would not expect that first list. On the other hand, the second list does seem arbitrary. Now, if that exact list appeared again the next time I listened to iTunes, and appeared again a few days later, that would be extraordinary.

I am confused by your definition of randomness, so I went back up to look for it. Here’s a paragraph where you touch on this:

If you say, I produced the second list by chance, no one would have a problem believing this, but most people would have a problem believing the first list was produced by chance.

But you add the qualifier “if you listen long enough.” It’s this phrase that sounds like you’re referring to infinity or something close to it. But you’re ruling out infinity, so I would question whether this notion of “long enough” is sufficient. This is difficult to answer, but what’s a ballpark number of hours at which one could expect to hear the first list? In other words, give me an idea of what is long enough. Does this (non-infinite) number exist? I’m particularly interested in the way math or science provides this answer, if it does at all. And if it doesn’t, wouldn’t that undermine your definition? My sense is a lot hinges on this answer. I think we may not be disagreeing about whether randomness exists, but about whether this notion of “long enough” exists.

Also, “long enough” implies that if one fails to listen long enough one is unlikely to hear the same artist ten times in a row, right? So how do you know this sufficient duration exists? How do you determine it (since it’s not infinity)?

By the way, I would say the qualifier doesn’t need to be invoked for something like the second list. If you disagree, explain why we would need this qualifier. Now, if we want to hear that exact list some time in the future, the qualifier would need to be invoked. That is, if we “listen long enough,” yes, we can expect to hear it again. But if we don’t listen long enough, it’s not likely. Or do you disagree?

I wanted to respond to some of the other points, but I’m going to stop here and focus on the points above.

On a related note to this notion of a sufficient limit (“long enough”), here are some other ways that show the importance of this point:

1. If I listen to iTunes shuffle for 8 hours, I think we would agree that is not long enough to hear your first list;

2. But one could say, “Don’t think of 8 hours as the limit. You have to include the hours after that.” That is, suppose I listen to iTunes shuffle 8 hours every day. I need to include all those hours. If I do, not hearing your first list may not be strange, because the limit or range is actually longer.

3. Let me pause here: Is expanding the limit/range in this way appropriate? Why or why not? (I haven’t thought of an answer for this.)

4. Now, we could expand the range/limit even more, to the number of hours a human being could listen to music over their lifespan. But suppose we surmised that limit/range wasn’t sufficient to hear list one?

5. We could expand the limit/range to go beyond the possible listening hours for one individual and make it generational…. If we made this move, it feels like we want to approximate infinity. The number may not be infinity, but it’s large enough to give a similar effect. Do you think that’s what you’re doing when you say “long enough”?

6. On a related note, what constitutes “long enough” depends on the degree to which something seems planned or seems to contain inherent structure or design. For example, hearing all my albums in alphabetical order with the proper song order of each album would be way more improbable than hearing songs by the same artist three times in a row. The duration needed for the first would be way longer than the second situation. (Such a number for the first situation may not exist.)

7. Something I realized, and this may not be crucial. I use the word “improbable” above. But this is not a mathematical calculation. I’m saying the situation seems highly unlikely because a chance operation would not be able to produce such a situation. But I don’t think this sense is based on math or science.

Randomness continued

I want to expand on the sixth point above–namely, “what constitutes ‘long enough’ depends on the degree to which something seems planned or seems to contain inherent structure or design.” What I’m getting at is the notion that there are degrees of randomness–or degrees of un-randomness. Before I get into that, I looked up the definition of “random” and here are some definitions:

“a haphazard course”

at random–“without definite aim, direction, rule, or method”

“lacking a definite plan, purpose, or pattern”

Hearing two songs in a row by the same artist or three songs by the same artist out of nine songs doesn’t suggest a pattern or something that isn’t haphazard. At the same time, this would not be egregiously un-random. On the other hand, hearing an album in its proper song order would be egregiously un-random. This degree of difference in randomness is important. A degree of un-randomness is not unusual; in fact, absolute randomness seems unnatural. But there’s a point where something can be too un-random–and we start to question whether there is an underlying pattern or plan.

Another important point in the itunes example: are number of songs for each musician relatively equal? If one or two artists have far more songs than other artists, hearing more of their songs in a shuffle mode (that was random) would be expected.
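To put a rough number on that, here’s a small sketch (with made-up catalog sizes, assuming each pick is independent and ignoring any no-repeat rule): the chance that two consecutive picks share an artist is the sum of each artist’s squared share of the library, so a skewed library makes repeats noticeably more common.

```python
# Chance that two consecutive independent picks share an artist:
# the sum of each artist's squared share of the library.
# (A sketch: ignores iTunes' actual no-repeat shuffle behavior.)

def same_artist_prob(song_counts):
    total = sum(song_counts)
    return sum((count / total) ** 2 for count in song_counts)

# Ten artists with 10 songs each (100 songs total).
balanced = same_artist_prob([10] * 10)

# One artist with 55 songs, nine artists with 5 each (100 songs total).
skewed = same_artist_prob([55] + [5] * 9)

print(round(balanced, 3))  # back-to-back repeats are fairly rare
print(round(skewed, 3))    # repeats more than three times as often
```

With the balanced library the figure is 0.1; with the skewed one it jumps to 0.325, even though both shuffles are equally random.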

I appreciate your lengthy response, but I think you’re not understanding how randomness works — at least randomness according to a textbook definition. It makes sense since you don’t believe in it, but in order for my point to be clear, and to answer your questions and objections, I’m going to have to illustrate why the likelihood of hearing both lists is exactly the same. But to do that, I have to reduce the scale.

So let’s reduce the playlist to only 10 songs, played in shuffle mode. These 10:

One by Metallica

Two Tickets to Paradise by Eddie Money

Three Little Birds by Bob Marley

4th of July, Asbury Park (Sandy) by Bruce Springsteen

Take 5 by the Dave Brubeck Quartet

Sukiyaki by Kyu Sakamoto

Drilled to Kill by Michael Schenker Group

Eat It by Weird Al Yankovic

Out of the Blue by Debbie Gibson

Morning Like This by Sandi Patti

Assumptions: (1) we set iTunes not to repeat any song until the entire list is played through and (2) iTunes is capable of true randomization.

The likelihood this will happen exactly like this is 1 in 30,240. So it would take very long and it’s extremely unlikely, even with a playlist of only 10 songs. But as you can see, it’s the same likelihood as the first list.
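As a side note on where 1 in 30,240 comes from (my arithmetic; the post doesn’t spell it out): 30,240 is 10 × 9 × 8 × 7 × 6, the number of distinct ordered five-song openings a ten-song playlist can produce, which fits the later talk of the “first five draws.” Ordering the entire ten-song playlist would be a different, larger count.

```python
import math

# 30,240 = 10 * 9 * 8 * 7 * 6: the number of ordered five-song openings
# that a ten-song playlist can produce.
five_song_openings = math.perm(10, 5)
print(five_song_openings)  # 30240

# Ordering the entire ten-song playlist is a larger count: 10! = 3,628,800.
full_orderings = math.factorial(10)
print(full_orderings)  # 3628800
```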

iTunes doesn’t know what the song titles are. This is just a list of ten files it is asked to play in any order. The only reason you’d think the first list is more extraordinary is that you bring external context and meaning to the first list, while the second has no meaning at all. It would take very long, but definitely, finitely long, to hear it.

I asked: “To iTunes, each sequence is an arbitrarily determined order of songs — nothing amazing about either one. Do you agree?”

You answered: “No, I don’t. We agree the first list would be amazing–precisely because the order is not arbitrary. Itunes shuffle may be choosing it in an arbitrary way, but if that’s the case, we would not expect that first list. On the other hand, the second list does seem arbitrary. Now, if that exact list appeared again the next time I listened to itunes–and appeared again a few days later–that would be extraordinary.”

Do you still feel the same way?

Right, and that fits with the dictionary definitions of randomness that I listed in the previous post–i.e.,

“a haphazard course”

at random–“without definite aim, direction, rule, or method”

“lacking a definite plan, purpose, or pattern.”

Based on these definitions, the first list does NOT seem random, while the second one (as far as I can tell) does. Based on what you’ve said, it seems like you don’t think these definitions apply–or maybe you think the use of probabilities provides a truer or superior definition of randomness?

Here’s something else I’ll throw out there. The second list, in one way, is just as probable as the first. However–and I won’t be explaining this well–the second list, in another way, represents or is the “same” as many other lists. Take the following list:

Miles–Miles Davis

Somewhere Over the Rainbow-Iz

Saturday Night–Bay City Rollers

I’ve Got You Under My Skin–Frank Sinatra

Drive–The Cars

or

With or Without You–U2

Raspberry Beret–Prince

Ventura Highway–America

She Loves You–The Beatles

Iron Man–Black Sabbath

I’m not explaining this well, but I feel like the probability for lists that are truly haphazard–having no pattern or structure–is greater than for lists that have a pattern, purpose, or structure, like the first list. It would be difficult to get these exact lists to appear from iTunes shuffle, but getting lists like them, with no apparent pattern or structure, would not be that hard. Do you see what I’m saying?

Yes, I thought of this when I wrote this response, but can you see that the external context has nothing to do with the likelihood of its happening?

These definitions do NOT describe the first list because to call it anything other than haphazard or random would be to assign some kind of intent to iTunes, which it does not have.

I think this comes down to your mixing up a common use of “random” with the actual meaning, something I’ve fought against in general conversation ever since I became a math teacher. Random and arbitrary are not synonyms. When people say they have a random thought, they actually mean tangential or non sequitur. They don’t mean random, because our brains don’t work randomly.

When we talk about the likelihood of something in the context of a coin toss, we’re not using these societal ideas of “random”ness. Or, one of us is, and that’s why we can’t agree on this problem.

Think of it this way: if these ten song titles were written on separate slips of paper and thrown into a hat, and you drew them blindfolded, and your first five draws were in the order of the first list, it would still be random, because your selecting them was haphazard: TRYING to pull those titles in that order wouldn’t make it any more likely, unless the NBA was involved and froze some of the slips of paper.

I considered your other lists as something to bring up, but I didn’t want to complicate my main point, which you still don’t agree with, so I’ll take it on now.

Let’s say your list beginning with U2 is made up entirely of songs EXACTLY three minutes and forty-three seconds in length. Is the list less haphazard or arbitrary now? Or what if each song was recorded at Electric Ladyland Studios in New York and mastered by the same sound engineer? All these factors make the list more notable in certain contexts, but none of them makes it less random than any of the lists we’ve shared.

Tangential thought: iTunes will make a smart playlist of all the songs in your library exactly 3:43 in length. I’ve done it for a specific reason.

And while I’m stirring up the silt, I have another question.

Let’s say one of your children is ill. All the medical literature says a certain surgery will take care of it, and it has a 98% success rate, and the success or failure isn’t related to the doctor’s skill or anything genetic. It seems to be pure chance. In 2% of cases, the surgery makes the condition worse.

There are two local surgeons who do the procedure. One of them has done 199 operations: the first 195 were successful, but the most recent 4 weren’t.

The second surgeon has also done 199 operations, and they were all successful: no failures.

All other things being equal (such as your out-of-pocket costs or your comfort with the doctors), do you feel safer asking the first doctor or the second to operate on your child?

Yes, but here’s where I see a problem in your example (and I tried to touch on this in the previous post). I assume list 2 is supposed to represent a list made haphazardly—where no criteria or principle is at play in the selection and sequencing of the songs. But generating that specific list by chance would be difficult and rare. However, instead of choosing specific songs in a specific sequence, we should simply describe list 2 as a list of songs that do NOT utilize criteria in their selection (e.g., songs from 1971, ballads, etc.) or principles that determine the sequence of the list (e.g., alphabetical order, shortest to longest duration, etc.). The specific list you gave (list 2) is only one of many, many different examples of that. If I listen to iTunes shuffle one hour every day, I would likely get this type of list almost every day. That is, a sequence like the one in list 2 would be common, would occur with high probability. However, it would be remarkable if a list like list 1 occurred even once. In my view, we should calculate two probabilities: the probability of getting a list of the first type, and the probability of getting a list of the second type.

To me, the probability for the first type of list would be a lot lower than the second type of list.

I think we may be getting tripped up with the word “intent.” If I use a filter for itunes, and itunes creates lists based on the criteria I have inputted, itunes does not have intentionality. One could say I gave it intentionality. But list 1 does not have characteristics of being haphazard and random.

The “actual” meaning is “selection influenced by no consideration other than chance?” If so, why is this the actual definition, while the dictionary definition is inferior or false?

To me, your definition (the one based on probability) and the dictionary definition are compatible—maybe they even work together. My sense is that the reason you disagree is that you’re using infinity (or some huge number like it) when you’re thinking about the probabilities. With infinity as the range or parameter, list 1 generated purely by chance is unremarkable (which makes sense to me). With an infinite number of attempts, the chance of a 1-in-30,240 event occurring at least once gets closer and closer to 100%. But when the range or parameter is much, much smaller, then generating list 1 purely by chance is extremely remarkable—to the degree that one would question whether the process occurred purely by chance.
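To put a number on “closer to 100%” (a sketch assuming each play-through is an independent 1-in-30,240 chance):

```python
# Chance of a 1-in-30,240 event occurring at least once in n independent
# attempts: 1 - (1 - p)**n. It climbs toward 100% as n grows.
p = 1 / 30240

def at_least_once(n):
    return 1 - (1 - p) ** n

for n in (1000, 30240, 100000, 1000000):
    print(n, round(at_least_once(n), 4))
```

At a thousand attempts the chance is still only a few percent; by a million attempts it is essentially certain, which is the sense in which “long enough” changes everything.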

But the dictionary definition of randomness would be relevant when evaluating the actual sequence of the coin flips. For example, consider this sequence: HTTHHTTTHHHTTTTHHHHTTTTT…and let’s say this pattern persists up to a thousand coin flips. Couldn’t we rule out chance as the explanation because this sequence’s likelihood is so remote? Or we would question whether the coin was balanced/fair. Would you say these coin flips are random?
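There is a standard way to make “we would question whether the coin was fair” concrete: count the runs. Here’s a sketch that assumes the pattern keeps alternating runs of growing length (the post only shows the start). A fair random sequence of 1,000 flips averages about 500 runs of heads and tails; this patterned sequence has only a few dozen.

```python
# Build 1,000 flips of the pattern H TT HH TTT HHH TTTT ... (runs of
# growing length, extrapolated from the opening shown above).

def patterned_flips(n):
    flips, face, length = [], "H", 1
    while len(flips) < n:
        flips.extend(face * length)
        face = "T" if face == "H" else "H"
        if face == "T":  # run lengths go 1, 2, 2, 3, 3, 4, 4, ...
            length += 1
    return flips[:n]

def count_runs(flips):
    # A new run starts at every position where the face changes.
    return 1 + sum(a != b for a, b in zip(flips, flips[1:]))

seq = patterned_flips(1000)
# Far below the ~500 runs a fair random sequence would average:
print(count_runs(seq))
```

Any specific 1,000-flip sequence is equally unlikely, but a run count this far from expectation is exactly the kind of structure that makes a statistician doubt the process, not just the outcome.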

(I’ll try to respond to the last two examples that you gave later.)

(Cont’d)

I can understand why you would call the draw random—and I agree. But suppose that instead of ten songs, the total number was 1,000. If I randomly drew the first five songs, the likelihood of me picking songs with numbers in the title, sequenced by the numbers’ values, would be incredibly low. We would not expect that to happen—even if I attempted this many, many times. On the other hand, a random drawing would likely produce lists very similar to list 2—namely, lists of songs with no criteria linking them or principle dictating their order. There would likely be no pattern or structure to the five songs. If I ran this experiment many, many times, I would expect the overwhelming majority to be like list 2.

I disagree. The lists above would be less random—i.e., less haphazard. The lists aren’t generated purely by chance. Now you’re putting in criteria that will narrow the choices. Aren’t the specific duration and location of the recordings external factors that are influencing the selection? What am I missing here?

My initial impulse is to say that either option is fine. I say this because the sample size for the total number of operations seems small and the incidences of bad outcomes seem negligible.

Now, let’s suppose the surgeon who never had a bad outcome performed the procedure many more times (e.g., 100 or 200 more) and still didn’t experience a bad outcome. At some point, wouldn’t you question the legitimacy of the 2% probability, if no bad outcome appeared? No bad outcomes after 200 operations doesn’t seem so unusual. But after 500? 1,000? 2,000? (Note: I do not believe the number of possible attempts is equal to infinity. The actual number is way smaller than that.)
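For reference, the arithmetic behind those thresholds (a sketch assuming each operation independently has a 2% failure rate): the chance of a spotless record after n operations is 0.98^n, and it shrinks fast.

```python
# Chance of zero bad outcomes in n independent operations when each one
# has a 2% failure rate: 0.98**n.

def spotless(n, fail_rate=0.02):
    return (1 - fail_rate) ** n

for n in (199, 300, 500, 1000, 2000):
    print(n, spotless(n))
```

Already at 199 operations a spotless record is under a 2% proposition; by 500 it is roughly 1 in 24,000, which is where doubting the stated 2% rate starts to look reasonable.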

(Here’s a question: Suppose this surgeon performed 300 surgeries without a bad outcome. And say the other surgeon performed 300 and her bad outcomes hewed closely to the 2% number. In the next 50 surgeries, would you say they have an equal chance of a bad outcome occurring or does the former have a greater chance of a bad outcome occurring?)

I’ll get back to this after work (maybe; I’ve been having trouble sleeping lately and I need rest), but it sounds like you’re using the word “random” to describe a quality of the output, which I have been trying to steer you away from. When we say a list is random, we mean how it was assembled, not how it looks when it’s already been assembled.

But random assembly is linked to how something looks once it’s assembled…I feel like this is where we disagree. Your position seems to be that there isn’t any linkage or relationship between random assembly and the nature of the final product. Generally, things derived from a truly random process look very different from things derived from a non-random process. Sometimes random processes can produce things that have some signs of non-randomness, but overall and generally they do not. But it seems like you don’t believe this. Again, this seems like the crux of our difference. I’m saying that at a certain point, the results suggest the process is non-random; or, put the other way, we can expect results that are consistent with a random process. For example, a random process would not produce something like list 1. Only when we theorize about or rely on infinity would the appearance of list 1 seem acceptable and not extraordinary. Outside of theory, if we played iTunes on shuffle for 1,000 hours, how often do you think a list like list 1 would appear? Would it be more strange if it occurred or if it didn’t occur at all?

To me, the attempts are key to this. If we do something long enough, then all sorts of outcomes that don’t seem random can occur. But is this “long enough” purely theoretical, or something real and empirical? If you don’t play iTunes shuffle long enough, would you not expect a sequence like list 1? Would such a sequence be extraordinary?

With all due respect, the crux of our problem is really that you don’t seem to understand the concept of randomness. So if you’ll please indulge me once more, I’ll try again. It may take a while across a few posts this weekend, but I will also (hopefully) address all your points over these past few days.

Okay. I think you’re conflating the common usage of “random” and the mathematical meaning of “random.” Since the issue all these years has been about probability, I’m going to ask you to do your best to stick to the mathematical idea when you use the word.

For the concept of how the outcome looks, may I suggest “mixed,” “non-patterned,” or “without definition”? My recent illustration with the two iTunes song sequences demonstrates that a random generator can produce a seemingly unmixed sequence (my first list) as well as a seemingly mixed sequence (my second list).

I understand now that you took my second list as an example of a mixed list–an example of lists that LOOK like this list–when I meant for it to be this very specific list. So to answer that question: yes, you’re right that lists LIKE the second list are far, far, far, far, far more common and more likely to play in shuffle mode, but I was talking about this specific sequence of songs, which is just as likely to be played as the one-two-three list. I apologize for being unclear: I could have saved you a lot of reading and writing. Yet I somehow think you’re still going to disagree that the second list is just as unlikely as the first.

Yes, but that’s because a mixed product (or a product seemingly without pattern or form) is more likely. There is only ONE list that looks like the one-two-three list, while there are 30,000+ that look different. BUT there is also only one list that looks exactly like my second list. One out of 30,000+. So a random generator IS more likely to spit out something that looks mixed or without form. But the output, no matter what it looks like, is still random if it’s generated randomly. Please tell me you agree with this.
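A brute-force check of that claim (using the numbers 0–9 as stand-ins for the ten titles, and an arbitrary made-up “mixed” sequence): among all 30,240 ordered five-song openings, exactly one matches the one-two-three list, and exactly one matches any other specific list.

```python
from itertools import permutations

# Enumerate every ordered five-song opening from a ten-song playlist.
songs = list(range(10))      # stand-ins for the ten titles
ordered = (0, 1, 2, 3, 4)    # the structured "one-two-three" opening
mixed = (7, 2, 9, 4, 0)      # one arbitrary mixed-looking opening

all_openings = list(permutations(songs, 5))
print(len(all_openings))                             # 30240
print(sum(seq == ordered for seq in all_openings))   # 1
print(sum(seq == mixed for seq in all_openings))     # 1
```

Mixed-looking openings dominate the 30,240, but each particular sequence, structured or not, is a 1-in-30,240 event.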

I do believe this, at least the way you mean it. But please understand that when you say “signs of randomness,” it’s because you’re lumping all the mixed sequences together to compare them with the one unmixed sequence. My sister may not look Japanese to you, because you’re comparing her to a great number of Japanese people who, combined, look many other ways. But it doesn’t change the fact that she’s genetically as Japanese as I am. A list may look structured or it may look mixed, but you can’t tell by looking at it whether it was generated randomly or not. If you saw my playlist of songs exactly 2:30 in length, you would think they were randomly selected. They weren’t.

Not infinity. 30,240 times. Forget about hours, since song lengths differ. However long it takes to hear the ten-song playlist played 30,240 times.

But randomness says you probably wouldn’t have to wait that long. My one-two-three list could pop up the very first time you played the playlist on shuffle, or the tenth, or the 14,000th. It could also pop up the second and third times as well. Saying you’d have to wait until the 30,240th listen is to negate the concept of randomness.

Yes, it would be extraordinary, but so would my second list if it were to come out exactly this way. I don’t know if this is a true story because I didn’t click on it to see the details, but apparently somewhere in the world, there was a lottery whose numbers came up in sequence. Something like 44, 45, 46, 47, 48.

There was an uproar. People were sure there was a fix. But these people didn’t get that whatever their sequences were (say, 1, 27, 48, 62, 90) were equally unlikely. My heart kind of broke when I saw the reaction. And it’s one reason I’m going to stick with this until I’m sure that you at least understand the concept and are willfully disbelieving in it. But I’m not there yet. I think you’re rejecting the concept without getting it, and I can’t let this happen. You can reject science if you want, but I have to know you know what you’re rejecting.

(Mitchell, I wrote a response, but I’m holding off on posting it because I’m thinking it might be better to wait until you finish posting. If you’re finished for now, I’ll go ahead and post what I wrote.)

Edit:

I think I just realized something about your thinking–specifically regarding this remark:

(Note: You don’t have to respond, but I want to write this down here for myself, before I forget it.)

Gotta run.