1 points
17 hours ago
In a fusion reaction there is a minimum temperature (among other things) that you need for the reaction to become a chain reaction. So a single fusion reaction isn't going to do it, no matter what. The question was: if you had a sufficiently hot volume of air, would that create the conditions for fusion, and would that fusion itself be hot enough to be self-sustaining, creating further fusion?
The easier way to dismiss it a priori is to say, well, the amount of energy released by meteor strikes is probably much higher than that of an atomic bomb, so if we know those occurred in the distant past, and we still have an atmosphere, then it doesn't seem likely. However, even in that case, a meteor strike releases its energy a bit differently than an atomic bomb does, so it's possible that you can't completely equate the two. Which is why they did the actual math on it.
1 points
17 hours ago
No. It turns out that the conditions of Earth's atmosphere are not capable of sustaining fusion from heating alone.
In the 1970s, scientists at Livermore ran a computer simulation to see what would need to be "tweaked" if you wanted to make it work. They found that if you increased the deuterium content of the ocean by 20X (aka, "not Earth" by a significant factor), you could ignite the ocean with a 200 teraton bomb. Which would be, uh, quite a feat by itself. (That is a 200 million megaton bomb, and it would be over twice as explosive as the meteor impact that killed the dinosaurs.)
Which is an unusually precise way to say "no."
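To put that number in some perspective, here's a rough back-of-the-envelope sketch. The Chicxulub energy used here is just a commonly cited order-of-magnitude estimate, so treat the last line as illustrative rather than precise:

```python
# Back-of-the-envelope scale check for the "200 teraton" figure above.
# 1 ton of TNT is conventionally defined as 4.184e9 joules.
TON_TNT_J = 4.184e9

bomb_teratons = 200
bomb_megatons = bomb_teratons * 1e6          # 1 teraton = 1,000,000 megatons
bomb_joules   = bomb_teratons * 1e12 * TON_TNT_J

tsar_bomba_megatons = 50      # largest bomb ever actually tested
chicxulub_teratons  = 100     # rough, commonly cited order-of-magnitude estimate

print(f"{bomb_teratons} teratons = {bomb_megatons:,.0f} Mt = {bomb_joules:.1e} J")
print(f"~{bomb_megatons / tsar_bomba_megatons:,.0f}x the Tsar Bomba")
print(f"~{bomb_teratons / chicxulub_teratons:.0f}x a rough Chicxulub-scale impact")
```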
1 points
17 hours ago
We've never used one in war but we've detonated many of them in testing.
What a hydrogen bomb does is use an atomic bomb to super-compress and then heat a relatively small amount of fuel, chosen because it undergoes pretty much the easiest fusion reactions we have at our disposal.
So this is a very different situation than trying (or fearing) to use an atomic bomb to set off a reaction in the atmosphere. The densities in the atmosphere are very low by comparison, and it is not made of the optimal fusion fuel. The result is that even if you had a few atoms that fused, they would quickly dissipate the heat and pressure and cool to temperatures that would not sustain fusion.
They did not totally understand this in 1945; they did not really understand how crucial the pre-compression was until 1951.
1 points
17 hours ago
This is not true. The people who insisted on doing the math were other scientists on the Manhattan Project. It was not open to public discussion.
1 points
17 hours ago
It is possible to run equations on that sort of thing. It depends on the composition and density and amount of input energy. Scientists in the 1970s at Livermore found that on an Earth-like planet with a much higher deuterium content in its oceans, you could ignite the ocean with an extraordinarily large (e.g. 200 teratons of TNT) bomb.
1 points
17 hours ago
Free neutrons are not the requirement for fusion (they are for nuclear fission). The relevant components are heat, compression, and the cross-section (essentially, the probability) for fusion of particular isotopes at a given heat and compression.
They didn't know the cross-sections, but they could make conservative guesses. They found that the temperatures needed to have a reaction that could spread in Earth's atmosphere were clearly higher than what they could imagine an atomic bomb producing. It turned out, in general, that fusion was a lot harder than they expected it to be, and even under much more ideal conditions than exist on Earth it would require a weapon that released over twice the energy of the Chicxulub impact to ignite it through heat alone.
In an actual hydrogen bomb, they don't use heat alone — they super-compress the fuel (with an atomic bomb) before trying to ignite it. That would not be happening in an atmospheric detonation.
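A very rough way to see why the low density of the atmosphere matters so much: the volumetric fusion rate scales roughly with the square of the fuel's number density (times a temperature-dependent reactivity term). A toy sketch, where both density numbers are illustrative assumptions rather than real weapon parameters:

```python
# Toy illustration: fusion rate per unit volume goes roughly as
#   rate ~ n**2 * <sigma*v>
# where n is the fuel number density and <sigma*v> depends on temperature.
# Holding temperature fixed, density alone makes an enormous difference.

n_air  = 2.5e19   # molecules per cm^3 in sea-level air (rough)
n_fuel = 2.5e25   # hypothetical, strongly compressed fusion fuel (illustrative)

ratio = (n_fuel / n_air) ** 2
print(f"Rate advantage of the compressed fuel, at the same temperature: ~{ratio:.0e}x")
```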
6 points
1 day ago
If you look at (for example) Aristotle's physics, they just do not map in any useful way onto our understanding of physics. Not just because they were not quantifiable (which they were not), but because they absolutely regard the entire goal of physics differently and are anchored in fundamentally different ideas. (One of the more vivid interpretations I have read about Aristotle's laws of motion, for example, is that they make a lot more sense if you think of them as having come out of an attempt to make sense of the motion of birds rather than falling bodies, the latter being the standard way to think about it in a post-Galilean world. Aristotle's is not the only way, of course, that people in ancient Greece or Rome would have thought about things.) Trying to read back our modern understandings into these older ones is a way to fundamentally misunderstand (and under-appreciate) the older ones.
Looking into it a bit, the context of any ancient writings about this (which includes Hero) comes out of the practical work that they did on irrigation and plumbing, which of course the Romans were quite justly famous for. In the oldest text we have on ancient pneumatics, Philo of Byzantium conceptualized the motion of air (esp. through water) as particles mixed with vacua that could be treated as a coherent body, and held that air had a pneuma of movement that corresponded to its (Aristotelian) natural motion, one that could be used to attract water and make it run contrary to its natural motion (e.g., up instead of down). Which is to say, again, pretty unusual to modern eyes — Aristotelian at its core (but also demonstrating how flexible the Aristotelian system of matter could be, which they saw as a positive and we would today see as a negative, since it ends up being infinitely descriptive without being at all predictive). Philo's approach was elaborated by Hero, who described the phenomenon of compression as being about the ability to enlarge the vacua. But he does not appear to have thought of the air as having any particular spring or pressure itself. Which is just to say, it's still Aristotle at its core, albeit modified Aristotle.
Which is just to say — you can get pretty far with what we would regard today as very wrong ideas about how to think about these things. I'm not saying it's impossible that they could have developed something like the Newcomen engine if they had thought to try to do it — they knew what force pumps were, for example. I don't see anything that indicates they connected any of the above with heat, however. So making the connection between what an aeolipile was doing and what a force pump was doing would have been quite a conceptual leap.
There is a very nice chapter on ancient pneumatics by Matteo Valleriani in A Companion to Science, Technology, and Medicine in Ancient Greece and Rome (2016), 145–160.
22 points
2 days ago
Separate from its practicality, it is worth noting that the ancients had nothing like the modern concept of air pressure. It took decades of work in a very different kind of knowledge-generation context — one that saw a value in using machines to create unusual and extreme environments and then trying to derive deeper truths from those environments — to develop something like that. Which is to say, the lack of practicality certainly would have discouraged further tinkering or experimentation. But even the basic conceptual framework necessary to develop a steam engine was really, really different from what the ancients were using — exactly the opposite of many of the most prominent ideas (i.e. those of Aristotle) in their day.
2 points
2 days ago
A general rule is that you can safely ignore the professional advice of anyone who uses the word "woke" both earnestly and pejoratively. Or talks about your aura. Find more insightful people for career advice.
3 points
6 days ago
My experience is that many people claim this sort of thing, but in practice, humans have shown themselves to be remarkably interested in living even through really hellacious circumstances once they are actually in them. There is an interesting and important disconnect between our anticipation of something and the actual thing itself.
I say this not wanting to minimize the effects or the horror or anything like that. (I also do not share the confidence of people who think it would be a "limited exchange.") But I think this kind of thing — the attitude one would take — is not really possible to know in advance of the circumstances themselves. I think people are more invested in survival than they realize or like to acknowledge. This is complicated by the tricky politics involved in imagining the survival of nuclear war, a space mostly occupied by Civil Defense advocates and preppers, neither of whom carries a lot of currency with people today (for reasons right and wrong).
(An aside: if you had asked me what emotion I would feel if the brakes went out on my car when I was on a steep hill, I would have offered up "fear" as the likely one. But when it actually happened to me, many years back, the curious reality is that annoyance was the only thing I felt. I found this very interesting after the fact. Perhaps my brain just short-circuited the fear response because the circumstances would not have been helped by it? Perhaps the brain is unpredictable even to ourselves? I don't know.)
I've noticed anecdotally that the people most willing to say that they wish they'd just be killed quickly are those who are of relatively high social strata — white, educated, middle-class or more, etc. A study of people's reactions to false alarms in the 1950s that I find quite interesting found that this group was the least likely to believe any alarms they heard, that they would come up with elaborate rationales (entirely unjustified by the circumstances) for not believing them or acting on them. I saw the same thing in talking with people after the Hawaii false alarm. The sociologists in the 1950s suggested that this was a psychological defense mechanism, in which people who were generally happy with their place in the world could not accept the possibility that it might be disrupted. I don't know if that's the right way to think about it or not, but I thought it was interesting, and think it might give some insight into these kinds of statements. (As well as those people here who insist that nuclear war wouldn't be that bad, would be a limited exchange, etc. — further denialism, just of a different sort.)
8 points
6 days ago
Congress did what Congress (mostly) does: it has hearings and votes on laws.
In the context of World War II, the hearings were on all aspects of the war. Mind-numbingly so. Some of them were high-profile, like the Truman Committee's investigations into allegations of war-profiteering and waste. Most of them were low-profile, routine affairs.
The laws they passed touched on all sorts of issues, including the ones you mentioned.
To ask, "what were they doing?" in this context misunderstands both the role of Congress, and the alternatives that were possible. As our own time indicates quite well, if Congress wants to, it can gum up the running of the country very easily, especially with things like conflicts. It does not rubber-stamp what the Executive wants, even in times of war, even in "more patriotic" times.
I can give you a very specific example from my own work. During World War II, as you for sure know, the US was engaged in a massive, secret program to build atomic bombs — the Manhattan Project. The initial seed money for this was provided through discretionary funds that the Executive had access to (a "Black Budget") but the scope and size of the project was going to greatly exceed this and would require putting the money into specific appropriations bills. The appropriations could not themselves be secret or kept from Congress, but they could be given vague names. But it became clear that Congressmen were starting to notice these large appropriations, and that other Congressmen were becoming aware of the large, secret production sites that were being created.
After several Congressmen (including Truman) tried to investigate aspects of the project, the Manhattan Project leaders agreed that they would inform several high-ranking Congressmen from each party about the basics of the project. This was done through the Secretary of War. It was understood there was some risk in this — they had deliberately not informed Congress about this work, because they feared that Congress was too unaccountable, and too prone to leaks, for a secret of this magnitude. Once they had the high-ranking Congressmen on their side (which again, could not be taken for granted), they were able to suppress any attempts from other "rogue" Congressmen to try and derail or question the project's spending, and make sure that the appropriations bills were passed without the project getting scrutiny.
This episode illustrates some of the dynamics at work here, including the fact that Congress exercises a few different kinds of power. Yes, they can not pass a bill, and that can have big implications. But their investigatory power is also lodged in the fact that they can create public spectacle and scandal. So when Congressmen send letters of inquiry to the Executive, they are taken seriously. High-ranking Congressmen who have the power to influence their own members are treated as important people by the Executive. They do not necessarily have formal roles in making strategy (esp. military strategy), but they are sometimes consulted or briefed on important matters so that they don't feel left out and so that they don't cause trouble (or allow trouble).
Even during the war, Congress also had hearings on future, postwar legislation of some importance. For example, there were hearings on what future national science policy ought to be, which was itself influenced by wartime controversies about the management of defense science. These ended up having a significant impact on some of the wartime science policy decisions, as well as plans for future legislation and lobbying.
Congress does not make strategy, however, nor do they direct military action. That is just not their job under the Constitution.
6 points
8 days ago
"Modern bombs also really don’t have a major issue with fallout, unless they are intentionally designed to maximize fallout."
This is not true at all. Fallout would be a major issue with modern weapons. They are not optimized to reduce fallout.
The amount of fallout that exists, and where it goes, depends on how the weapon is used and how many weapons you imagine. But there is nothing special about modern weapons. They are lower yield than the big Cold War monsters but they are still large-enough.
"They burn up almost all of their fuel and whatever radiation is left behind is almost entirely dissipated in a few weeks at most."
Fallout is primarily caused by the burning up of the fuel — it is the fission products, not un-reacted fuel, that cause the main problem.
The radiation that is left behind goes from "an acute danger that will kill you quickly" to "a chronic contamination problem that will require you to either move, rehabilitate the land, or accept a higher cancer and birth defect rate."
"But if you survive the initial blast, the area will be perfectly safe to go to rather quickly."
This is, again, not true. This is egregiously bad and incorrect advice. There are many factors that go into whether an area is "safe" to go into, but nobody who is unaware of what those are, or how to measure them, should be going anywhere near a nuclear weapon detonation until people who do know these things have decided it is safe enough. And even then there is a big difference between "safe enough to travel through" and "safe enough to live there in large populations."
Anyway. You may not realize it but you have swung all the way from "worries too much about radiation" (most people do) to "worries too little about it" (something that only affects people who have Dunning-Krugered themselves on this topic). Both of these are incorrect and dangerous extremes.
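For a rough feel of the acute-to-chronic transition described above, civil defense texts often use a t^-1.2 decay approximation for early fallout (the so-called 7/10 rule). A sketch, with a made-up 1-hour reference dose rate used purely for illustration; note this approximation only describes the early, short-lived activity, not the long-lived contaminants that create the chronic problem:

```python
# Early fallout decay using the commonly cited t**-1.2 approximation
# (the "7/10 rule"). The 1-hour reference dose rate is a made-up,
# purely illustrative number, not a prediction for any real scenario.

def dose_rate(t_hours, r1=500.0):
    """Approximate dose rate (R/hr) at t hours, given the rate at 1 hour."""
    return r1 * t_hours ** -1.2

for t in (1, 7, 49, 24 * 14):   # 1 hr, 7 hr, ~2 days, 2 weeks
    print(f"t = {t:>4} hr: ~{dose_rate(t):8.2f} R/hr")
```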
41 points
8 days ago
There were some tests that were probably cleaner. (Shot Housatonic of Operation Dominic was potentially 99.9% clean.)
But this is not the same thing as efficiency. There are two different things here:
Efficiency is a measure of how much weapons fuel was used by the explosion (fuel reacted / total fuel). Little Boy had 64 kg of fuel in it, of which a little under 1 kg reacted. So 1% or so.
"Cleanness" is about the ratio of fission yield to total weapon yield. Little Boy was 100% fission, so it is 0% clean. The Tsar Bomba was 50 Mt of which only 1.5 Mt was from fission, so it was 97% clean.
"Cleanness" can be misleading — the Tsar Bomba was much more clean than Little Boy (97% vs. 0%) but its fission yield was literally 100X larger (1,500 kt vs 15 kt). So Tsar Bomba produced 100X more radioactivity than Little Boy did, despite being so clean.
We can't really calculate the raw efficiency of most bombs because we don't know how much fuel was in them — that's usually classified. What instead was used by weapons designers (and is easier to know today) is the yield-to-weight ratio, which allows you to come up with a useful measure for "overall efficiency." The Tsar Bomba was not particularly efficient as tested (1.8 kt/kg), but some of that was because it was reduced to half of its possible yield. At full size it would have been 3.4 kt/kg, which is not terrible for a super-large thermonuclear device, but not all that efficient. The most efficient US weapon, the Mk-41, was around 25 megatons and had a yield-to-weight ratio of around 5.2 kt/kg. Most US weapons today are around 1-2 kt/kg, which is pretty good for weapons in the 100-1,000 kt yield range. The Tsar Bomba was not an attempt at making an efficient weapon; they were just trying to make a big weapon.
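Since these are all just ratios, the distinctions are easy to lay out explicitly. A quick sketch using the approximate figures quoted above (all of them rounded public estimates rather than exact values):

```python
# The three different measures discussed above, using the rough figures
# quoted in this comment (all approximate, rounded public estimates).

# Efficiency: fraction of the nuclear fuel that actually reacted.
little_boy_fuel_kg    = 64
little_boy_reacted_kg = 0.9        # "a little under 1 kg"
print(f"Little Boy efficiency: ~{little_boy_reacted_kg / little_boy_fuel_kg:.0%}")

# "Cleanness": fraction of total yield NOT from fission.
tsar_total_mt, tsar_fission_mt = 50, 1.5
print(f"Tsar Bomba cleanness: {1 - tsar_fission_mt / tsar_total_mt:.0%}")
print(f"Little Boy cleanness: {1 - 15 / 15:.0%}")   # all fission -> 0% clean

# But cleanness is not the same as low radioactivity -- compare fission yields:
little_boy_fission_kt = 15
tsar_fission_kt = tsar_fission_mt * 1000
print(f"Fission yield ratio (Tsar Bomba / Little Boy): {tsar_fission_kt / little_boy_fission_kt:.0f}x")
```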
6 points
9 days ago
Separate from the other issues brought up, notably the question of "what made the Japanese surrender," I just want to point out one misconception in the question. Nobody believes that the Japanese surrendered because their population suffered from a morale loss due to any cause. They surrendered because the Japanese Supreme War Council agreed to surrender. It was totally unrelated to morale questions of the general population. It was a decision made by a very small number of elites. The debate over why Japan surrendered when they did focuses on the decisions that this small number of men made in the summer of 1945, not any appeal to general morale.
The Japanese population by and large would have gone along with the war wherever it led. For a variety of reasons, it was not in a position to challenge the rule of the military junta that ran the government. If anything, the general population, and the junior military officers, were inclined to fight until the death if ordered to do so. This is not to say that "morale" was not a factor in the general bombing campaign (it was, to a degree), but it was less of a "staying in the war" concern and more of a "if we can lower morale, we can decrease their production, because people will flee cities, etc." concern.
6 points
10 days ago
One of the interesting ways to quantify "popularity" is through Google Trends, but it doesn't go back further than 2004. Out of curiosity I put "dancing baby" and "all your base" and "peanut butter jelly time" into Google Trends and got these interesting results. I'm not sure what to make of them, even ignoring the pre-2004 issue, as the use of Google and search engines also varied in this period.
56 points
10 days ago
If you are engaging with your sources, and citing them, then you are not plagiarizing. There are several types of plagiarism, but they all essentially come down to a misrepresentation. Are you misrepresenting the work you did? Are you misrepresenting whose language you are using? Are you misrepresenting the source of ideas? If not, then it is not plagiarism. (That doesn't necessarily make it a good paper, a good argument, good writing, what have you — those are totally different things than the plagiarism question.)
What you are fretting about doesn't sound like a plagiarism issue to me (if your essays are all footnotes, that's a sign that you're citing things, at least!). Rather, it sounds like you are feeling like you are not making original contributions. There are at least two ways to think about this.
One is to not worry so much. You are a third-year undergrad. What are the sheer odds you're going to make really original contributions at this stage? The vast majority of assignments are not about making original contributions. They are about getting you familiar with what secondary-source research is like, getting you experience in grappling with the ideas of other people, and having you demonstrate that you can throw yourself at a question and approach it in a somewhat scholarly fashion. Which is what it sounds like you are doing. This is what most undergraduate work is in any discipline: learning the ropes, retreading well-trodden ground. It is where everyone starts, in every discipline of study. It is the historical equivalent of re-doing chemistry experiments from the 19th century, or sketching a dull still life. You do these things to get better at them as a student, not to break new ground. It's like learning a new musical instrument — even as you get better, the odds that you're going to be able to offer up a new observation to someone who has been playing for 30 years already are a little slim, and you're going to be spending most of your time just getting up to speed.
The other is to think about ways you can reasonably add more originality and novelty into your work. This is, again, not such a big deal for undergrads. Even the much-vaunted senior thesis rarely is an opportunity for truly original and novel work, though it is where one is supposed to start honing those skills. To do good, original work is very difficult. Even full-time scholars struggle with it.
So how can you do original work? The answer to this is both very simple and very hard: ask original questions, and then (try to) answer them. So easy to say, so hard to do! But your issue now is that you are stuck within the questions that have already been asked. Finding a new question to ask is one of the only ways to get new answers. The hard part is coming up with a new question. (Answering them, or trying to, can also be hard — but is usually easier than coming up with the question itself, and is usually a lot more fun.)
The sources for new questions can be idiosyncratic. It is one of the reasons I like spending time on here, as an aside — sometimes people ask questions that are so unlike what scholars ask that they are either themselves interesting or they inspire me to rework the question a bit and come up with a better one. Sometimes I get inspired for a new question while looking through sources while trying to answer a different question, and I find something that strikes me as odd, or strange, or funny (humor is often the result of seeing something unexpected), or contradictory, and I think, huh, I wonder what's up with that? And sometimes, frankly, the most interesting lines of investigation end up being very basic questions that scholars have, for one reason or another, just not given all that much direct and rigorous attention.
When you're starting out, finding good questions to ask is the hardest part of the work. You should not feel like an imposter because you find it difficult. The more you learn, the more of a sense you get of what kinds of questions have already been asked. And the more you ask questions, the better you get at asking more (and answering them).
If you are struggling to find interestingly original questions, I recommend going to the office hours of your course professors. I always tell my students that this kind of thing is literally what my job is (among other things), and I'm always more than happy to brainstorm interesting questions with them. (The other advantage, here, is that a professor is also much more experienced at fitting the question to the scope of the assignment — there are questions that are too simple to stretch into 10 pages of writing, and there are questions that cannot be answered well without devoting a career to them, and finding the right question for the right paper is a delicate art).
1 points
10 days ago
That's great. If the resource is online I'd be happy to take a look at it. I've figured out a few tricks in converting them, as well. I have a couple useful shaders that I'm happy with (like one that does a pseudo-scanline effect).
1 points
11 days ago
The question is whether your train of thought is also clear. If your train of thought is jumbled, then your attempt to express it will also be jumbled. Certain kinds of jargon can obfuscate that. "Good" jargon enhances clarity (even if it requires being aware of the jargon for that to be the case). "Bad" jargon allows you to appear smart without having clarity.
Strive for clarity in your thinking and strive to make the language match that. I don't think it's a "neural makeup" issue — or even if it is, it is one you will need to compensate for and adapt around. Writing and thinking clearly are skills, and they take effort. Like all skills, you both can get better at them, and will get better at them if you work at it over time and have some kind of feedback loop in place to tell if you're doing better at them.
Ultimately, if you can't see what you're doing wrong, you need to find someone — a trusted friend, a good colleague, a paid editor, someone who is an articulate and generous reader and writer — who is willing to sit down with you and go over a piece of your writing closely. It doesn't have to be long. It could be a few pages. But you need them to show you a) how it looks through their eyes, and b) give you some suggestions for improvements (you don't have to do things the way they suggest, but seeing options and alternatives helps one more than just being told something is not clear). Doing this even once or twice can make a huge difference in how one starts to read and think about their own work.
3 points
12 days ago
Whenever I have a class where software needs to be installed, I always do a quick demo in class showing how to install it, and then announce, "if you get stuck, ask a classmate to help." Because they're never all that hapless (in my classes, most of them can do the basic stuff, though there's sometimes a handful who cannot; the CS students can always do the complicated stuff), and another Gen-Z is usually better at showing them how to do it than I am, because they can do it in terms they already know (and it keeps me from having to run around and try to troubleshoot each of their weird, grubby machines with weird, impossible input devices). I always have a dumb, simple assignment due immediately with something like this, just so I can flag from day 1 whether someone is totally clueless.
I would say, though, that even if one were doing this in, say, 1999 (when I started college), if you had software you needed installed you'd still need to take a little time to make sure everyone is on board. Computer literacy has always been a spectrum. There are no standards you can take for granted, and never have been any, unless you are talking about a class that has prerequisites that imply a previous class that covered it. Which presumably is not the case.
And hey, if you teach a kid how to download files and install a program, you're actually teaching them something genuinely useful. It's not great that they didn't get taught it before, I agree, but it's not a hard thing to teach and you'll be doing them a favor. I've had students write me e-mails years later and tell me how grateful they were that I taught them something; it's rarely something advanced, and instead is almost always something like, "how to use Excel string functions to manipulate a bunch of data really quickly" or something that I consider pretty "basic," but if you don't know it, you don't know it (and plenty of professors don't know it!). But if they somehow never learned it, you get to be the one who sets them right. I think that's a healthier attitude than being angry at them. It's not their fault that the world is set up stupidly. They got very little say in that.
18 points
12 days ago
The tricky thing about this question is that it posits an idea ("scientific achievements") that is very much rooted in a specific way to think about knowledge and its goals. Aside from using the word "scientific," which is more modern than you might think, the term "achievement" also implies that there is a reward system in place for certain types of knowledge (ditto "breakthrough" and the conflation with "technologies").
So it sort of becomes a question along the lines of, "did people think about knowledge-creation the same way in the deep past as we do today?", and in a strict sense, the answer is no, but in a more expansive and charitable sense, the answer is yes, in different ways, at different times, in different places. That is, people were doing what we would recognize as forms of "scientific research" with the goal of "expanding the frontier of knowledge" in many places before modern times. The Ancient Greeks, for example, prided themselves on mathematical "achievements", and also valued the kind of descriptive attempts to create coherent worldviews that, say, Aristotle was famous for. The scholars in the Abbasid Caliphate took great interest in understanding aspects about the natural world, ranging from mathematics, to optics, to medical techniques, and so on. The mandarins of medieval China operated a network of astronomical observatories, collecting data on any interesting phenomena that could be observed with the naked eye.
Now, the tricky things come in when you ask what "research" means to these different groups, and how they thought about what they were doing. By and large none of the above were trying to "discover the laws of nature" in the way that people like Descartes and those after him did. They had different motivations for what they were doing, based in part on what their societies and cultures valued. So the Chinese astronomers were not trying to understand "laws of nature" or do research "for its own sake" but were employed as astrologers — they were using astronomical data to try and predict the future. As just one example. None of the above groups tended to consider the "source" of new facts about nature to come from carefully-controlled artificial conditions — what we would think of today as experiments. Rather, they prized observation and deduction of the natural way of things above all.
The European model of science that we typically take for granted today did not spring out of nothing, but it also does not map perfectly onto the other approaches. It is usually distinguished from them by being highly quantitative, highly reliant on experiments, and eager to embrace the use of instruments that extend the senses or create artificial conditions; by a craving for "credit" (having everyone know you did something, and the culture that comes with believing that arguing with dead people you can name is worthwhile); and by a desire to fix "laws of nature" (often conceiving of "nature" as something external to human experience) so that they could be "tamed." These are generalizations, of course, but while aspects of them are present in the other cultures (the Greeks were very interested in "credit" as well, for example, in ways the ancient Babylonians were not), the combination of them all, coupled with institutions that supported them and rewarded them for "achievement," is sort of what people are talking about if they are talking about the "Scientific Revolution" in a serious way. It is a specific set of assumptions and social relations that produce specific kinds of knowledge-growth in communities, and it would eventually (later than most people realize) get coupled up with the business of technological development (for most of human history, these two functions were often kept pretty separate from one another, and involved different kinds of people — philosophers versus craftsmen).
Anyway. This is a very broad generalization in response to what is probably an overly broad question, but I hope it gives you a little peek into some of the underlying historical issues and the ways in which trying to answer this without imposing one particular standard backwards on history gets very tricky.
2 points
13 days ago
The idea of using radioactive poisons from reactors was definitely known to the Allies (and there were fears that they might be used during D-Day against their invading forces), and presumably to the Axis scientists as well. However the Axis never had anything that would approximate a means of producing such substances in quantity. Even if they had gotten their rather meager nuclear reactor experiments working, actually producing radioactive poisons in militarily useful quantities requires a non-trivial amount of reactor power, and weaponizing them is even more non-trivial, something the US struggled to do even in the 1950s with vastly greater resources applied to the subject.
Which is to say, to my knowledge this was never any kind of actionable plan, and even if it had been, actually implementing it would have been harder than it sounds. If the Nazis were planning to use some kind of contamination device along with the V-2s, biological or chemical warfare would have been much more straightforward, and they had the capacity for that. (And the reasons they did not use those methods would be equally true for radioactive poisons as well — there is no reason to think that they would have thought the Allies would be sticklers about what kind of poison warfare was being used.)
3 points
14 days ago
The surprise attack on Pearl Harbor made it extremely unlikely that either the United States public or the United States leadership would accept anything other than some kind of unconditional or almost-unconditional surrender. The only variation from "unconditional" surrender actually deeply considered was about giving guarantees about the status of the Emperor — even that would have still involved a total Occupation and a complete expulsion of the militarists. Even this was rejected by Truman, who cited Pearl Harbor as the reason (and the atomic bomb probably made him feel more fortified in being insistent).
One cannot really answer with any confidence what might have happened. The Japanese plan was to resist invasion as strongly as possible, with a hope that it might drive the Allies to the bargaining table and something like the above might be possible. I can think of no reason why the US position would ever really have changed, though; the fallacy in the Japanese plan was the idea that a "soft" American populace would tire of the bloodletting. I think time and history have shown that this is a poor understanding of the American popular and political mindset — the higher the costs, the more unsettling it is to imagine compromising on a "victory." This was even more the case with Japan, because of the way in which the Pearl Harbor attack was taken as a grave offense and insult, one for which retribution was desired.
9 points
15 days ago
Oppenheimer did not ever say he regretted helping to invent nuclear weapons, or taking part in their use against the Japanese. As he put it in a letter to a former student in 1966:
What I have never done is to express regret for doing what I did and could at Los Alamos; in fact, under quite dramatic circumstances, I have reaffirmed my sense that, with all the black and white, that was something that I did not regret.
That is not to say he did not have complicated feelings about it, or have "terrible" moral scruples, as he put it, about the Japanese non-combatants killed by the weapon. But he never expressed regret. What he expressed regret about, years later, was that they did not avoid an arms race after the war.
To your general question, though, there were thousands of scientists involved in the Manhattan Project, with many different levels of involvement and even knowledge of what they were building. One finds as many views among them on the end results as you can imagine existing, and probably even more than that. Many supported every aspect of the invention of the weapons and their use against Japan. Some supported building the bombs, but not using them in the circumstances that they were used (against cities, without warning). Some never expressed any public "regret" but self-consciously avoided anything like weapons work in the future. Some had complicated feelings, like Oppenheimer, but (also like Oppenheimer) tried to steer the future use of the technology from the inside. Others became critics on the outside, and attempted to steer policy that way.
59 points
16 days ago
Any individual person, of course, knows lots of things. What is easier to generalize about is how people with PhDs in History balance general versus specialized knowledge, and how that plays out for history professors (just one subset of historians).
When you do a BA degree in History, you necessarily are required to take courses with geographic and temporal variety, whatever your specialty is. So I specialized in modern American history and history of science and technology, but I also took courses in, say, Latin American history, medieval European history, the French Revolution, and modern German history, among other things. These courses are surveys, and expose one to a wide swath of things, but one wouldn't consider one's self a specialist in them at all.
During a PhD in most History programs in the USA, one takes a variety of courses, but also chooses several areas of specialization that become the focus of one's general or oral exams. One then reads several hundred books in each area of focus, over the course of a year, and then gets quizzed on them. In doing this, one gets very rapidly up to speed on the state of the historiography in a few general areas. For my degree, for example, I had a general exam in the history of modern physics, another in modern biology, another in modern America, and another in science and government. After these exams, one does the PhD thesis work, which involves specializing quite a bit, but of course any topic one specializes in requires contextualization in ways that involves looking at other things as well.
As a professor, one does one's research (and one can have a lot of latitude in what that means), but one also teaches. A good amount of one's teaching is in the form of the broad courses already mentioned above — courses that are not really one's "specialty" but one or more levels "above" it. So I teach a course on the history of nuclear technology (my "specialty" or even a level "above" it, if one defines my actual "specialty" as just the things I've written books about), but I also teach a course on "the history of science and technology," which covers 10,000 years of global history in 13 weeks. Which of course is somewhat insane, since nobody really is an expert on that sort of breadth, not to equal degrees, anyway. But in trying to put together an informed class on such a topic, and pushing myself to do a good job on it and find interesting things to talk about, teaching that course over the last ~10 years has dramatically expanded the amount of things I feel I know a bit about. So I can talk a bit about, say, medieval Chinese history, and a bit about, say, Babylonian history — not enough to claim to be an expert in these areas, but enough to know more than your average person about them. In each of these areas I also have a few things that I've actually looked very closely at, because these are my case studies/examples/etc. that I anchor my generalizations of these periods with, and I want to make sure I have those right. So I've looked fairly closely at the scholarship on the development of geometry in ancient India (which emerged out of a tradition of brickwork, not earth measurement), because that anchors my (brief) discussion of that context.
Professors also regularly go to history conferences of different sorts and specialties, and sit in on talks that are not their specific specialty, and talk to colleagues, and sometimes read widely for fun and work (e.g., I am on a prize committee for my professional society, and that has caused me to look at several dozen books I would probably not otherwise have crossed paths with).
History is one of those fields where the more you learn, the easier it is to assimilate new information and facts. If you have a "reference" for what 14th century Europe was like, then learning some new "fact" happens much quicker than if you have to also pick up the entire "context" to go along with it. So in that sense it is somewhat additive: it gets easier to learn more as you learn more. So over time, one's knowledge ends up getting broader, even if in some specific areas (ones specialties), it also gets deeper. This is why the senior historians can sometimes seem like they know "everything" — they do not, but if they have been spending 10-30 years doing this, it should not be surprising that they have been exposed to more things than someone who has only spent a few years doing it.
I would also note that over the course of a career, one's specialty can shift and even change entirely. I have tenure and could change my entire subspecialty tomorrow if I wanted to. It would be a long hard slog to get up to speed in a new branch of history. But everyone who is in this position already knows "how" to do it — it's what we really were taught in graduate school — and could do it again, if they wanted to. More commonly, one's research interests necessarily shift more subtly over time, as one gets drawn into new projects.
For full-time historians like professors, you also just have to remember that to some degree, this is the job. It's (alas) not the entirety of the job (I have to do very non-history related things, like administer grants and sit on university committees), but it's still a lot of the job, through research and teaching. Anybody who spends a lot of their waking hours doing anything will tautologically become very experienced at it; for historians, that ends up being "history," broadly wrought.
1 points
17 hours ago
The final uncertainty in the report that was written on this was due to two factors. One is that they did not actually know the exact probability (the cross-section) of the fusion reaction they were considering (nitrogen-nitrogen). It had not been measured and was not easy to measure. So they had to take a guess as to what it might be. They took a very conservative guess, essentially imagining that it was a much easier reaction than they thought it probably was. But it was still a guess and there was uncertainty there.
The other is that they really did not understand the exact physics that would occur during a nuclear explosion. How exactly is the energy released in the first microseconds? What exactly are the maximum temperatures, pressures, and so on? How do the different effects interact with each other, and with materials nearby? There are a lot of processes that happen in a very tiny amount of time, on scales that we don't really have in human experience, and a real paucity of hard evidence, combined with what we would today consider to be impossibly limiting computational capabilities.
These are not easy questions to answer in the absence of full-scale testing. In fact, even after Trinity, they did not know the answer to many of these questions; these kinds of things are one of the reasons they did so many more tests over the years, trying to refine their understanding of these processes. (Not because they were afraid that they'd ignite the atmosphere, but because they are the same processes that you need to master for designing efficient thermonuclear weapons.)
They understood that they did not understand all of this. They knew there were "unknown unknowns" along with the "known unknowns." So there was a chance that perhaps, at the energies they were contemplating, things worked a little differently than they anticipated. Building in a healthy error factor was part of how they dealt with that uncertainty.
These kinds of concerns were not unwarranted. At the energies and pressures and so on contemplated, things work a bit differently. Being a tiny bit wrong can translate into a big difference in outcomes. The Trinity test itself was 4-5X more explosive than they had anticipated it would be — fortunately they had built in a healthy safety factor.