One of the largest planned wave energy projects in the world — the ARENA project in Australia — recently bit the dust. The 19 MW project, which was slated for development off the coast of Portland in Victoria, Australia, was once advertised as the biggest wave energy project in development in the world. Its failure therefore represents a significant blow to the industry, especially considering that the stated reason for the project's demise is that it wasn't "commercially viable".
The company behind the project — US-based Ocean Power Technologies (OPT) — recently fired its CEO, Charles Dunleavy, so it may be that there’s more to it than that. The project would have cost around $232 million to develop, with $66.5 million of that set to come from the Australian government, as per a previously made pledge.
The project has always had its doubters, though, so its failure isn't a surprise to everyone, as RenewEconomy notes:
The decision by Martin Ferguson’s Department of Energy in 2009 to pick the OPT project as a candidate for funding raised eyebrows at the time, particularly because Australian-based technologies were overlooked.
Even back in 2009, I wrote in the now defunct Greenchip column in The Australian newspaper that OPT had been accused of being unable to deliver on its own projects.
“OPT was criticised by Collins Stewart, its sponsoring broker on London’s Alternative Investment Market, last year because of delays and cost over-runs at a wave project in Spain. Collins Stewart broker Raymond Greaves told The Times last July that OPT had a ‘total inability to deliver’ on projects. ‘The continued delays baffle us,’ he was quoted as saying.”
Humorously (in a way…), the ARENA project was being developed at the same site where an earlier version of OPT’s PowerBuoy technology was “scrapped after part of the machine snapped off while being towed into place by an ocean tug in 2002.” That project was funded by the Australian federal government and the local Victorian government.
Not really a good track record to date.
It’s currently unclear whether the money pledged by the Australian authorities will be made available to other wave energy projects or simply reabsorbed by the government. Only $5.6 million was delivered before the project failed, and that money will be repaid, according to reports.
James Ayre ‘s background is predominantly in geopolitics and history, but he has an obsessive interest in pretty much everything. After an early life spent in the Imperial Free City of Dortmund, James followed the river Ruhr to Cofbuokheim, where he attended the University of Astnide. And where he also briefly considered entering the coal mining business. He currently writes for a living, on a broad variety of subjects, ranging from science, to politics, to military history, to renewable energy. You can follow his work on Google+.
Britain hosts the European ocean energy test centre (EMEC) in the Orkney Islands. After decades of research and pilots, wave technology is still not mature enough for commercial deployment in any of its many variants. The ocean surface is a perfectly horrible environment for any machinery, especially electrical equipment. So best to keep on researching.
This does not apply to tidal, which is mature. In tidal flow setups, the generators are completely submerged, in a stable and manageable environment.
Wave/ocean power and geothermal have promised much over the years but delivered bugger all. As Calamity_Jean says, wind and solar are proven technologies with a lot of sites still untapped and further efficiency gains to come. As storage increases they will only look more attractive.
Tethering devices to the ocean floor or forcing water 3 km underground doesn’t really stack up. Wave energy should still be encouraged via private funding and appropriate tax breaks but let the developers prove it first, at their investors’ expense.
For the same $230 million they could be generating 2 gigawatts, still in the ocean; then it would be a very attractive option for business.
Commercial viability was never a possibility. The pilot plant may have led to commercial viability in the future, presumably in locations that have a carbon price, but there was no way it was ever going to make money in a country where the wholesale price of electricity averages around 3 to 4 cents and is falling. So cancelling this project for not being commercially viable is like cancelling a Death Metal Concert for not having enough polka music. It was never supposed to have polka music in the first place!
My understanding is that this project had technical problems and that even in the long term and at scale it was unlikely to be commercially viable. It would seem that a competing Australian company, Carnegie Wave Energy (CWE), which has also received some government support but was initially less favoured, is a better long-term bet. There are also a couple of European designs which look promising. Wave energy is very concentrated compared to wind, so can have a smaller footprint. The CWE design is fully submerged, so it has no visual impact and it can function where it is not a danger to (or in danger from) surface vessels.
I expect that soon we’ll have a few designs for wave power that clearly have promise, and they will be trialed on a larger scale and modified, and then a winner will emerge from among them — more or less the way wind power is now always a variation on a three-blade design. But I don’t know if it will ever play a significant part in energy generation. However, I am glad people are working on it. I would be surprised if wave power ends up a large part of the energy generation pie, but I have been surprised before.
Honestly, why bother with wave energy? Solar and wind are cheaper, easier to install, and nowhere near running out of good locations. Use part of the “wave energy” research money for improving batteries, and use the rest to subsidize wind and solar.
If humanity always waited for lame excuses like commercial viability we would still be riding horses.
Australia completely overbuilt their electrical system already.
Earth and Mercury are both rocky planets with iron cores, but Mercury’s interior differs from Earth’s in a way that explains why the planet has such a bizarre magnetic field, UCLA planetary physicists and colleagues report.
Measurements from NASA’s Messenger spacecraft have revealed that Mercury’s magnetic field is approximately three times stronger at its northern hemisphere than its southern one. In the current research, scientists led by Hao Cao, a UCLA postdoctoral scholar working in the laboratory of Christopher T. Russell, created a model to show how the dynamics of Mercury’s core contribute to this unusual phenomenon.
The magnetic fields that surround and shield many planets from the sun’s energy-charged particles differ widely in strength. While Earth’s is powerful, Jupiter’s is more than 12 times stronger, and Mercury has a rather weak magnetic field. Venus likely has none at all. The magnetic fields of Earth, Jupiter and Saturn show very little difference between the planets’ two hemispheres.
Within Earth’s core, iron turns from a liquid to a solid at the inner boundary of the planet’s liquid outer core; this results in a solid inner part and liquid outer part. The solid inner core is growing, and this growth provides the energy that generates Earth’s magnetic field. Many assumed, incorrectly, that Mercury would be similar.
“Hao’s breakthrough is in understanding how Mercury is different from the Earth so we could understand Mercury’s strongly hemispherical magnetic field,” said Russell, a co-author of the research and a professor in the UCLA College’s department of Earth, planetary and space sciences. “We had figured out how the Earth works, and Mercury is another terrestrial, rocky planet with an iron core, so we thought it would work the same way. But it’s not working the same way.”
Mercury’s peculiar magnetic field provides evidence that iron turns from a liquid to a solid at the core’s outer boundary, say the scientists, whose research currently appears online in the journal Geophysical Research Letters and will be published in an upcoming print edition.
“It’s like a snow storm in which the snow formed at the top of the cloud and middle of the cloud and the bottom of the cloud too,” said Russell. “Our study of Mercury’s magnetic field indicates iron is snowing throughout this fluid that is powering Mercury’s magnetic field.”
The research implies that planets have multiple ways of generating a magnetic field.
Hao and his colleagues conducted mathematical modeling of the processes that generate Mercury’s magnetic field. In creating the model, Hao considered many factors, including how fast Mercury rotates and the chemistry and complex motion of fluid inside the planet.
The cores of both Mercury and Earth contain light elements such as sulfur, in addition to iron; the presence of these light elements keeps the cores from being completely solid and “powers the active magnetic field–generation processes,” Hao said.
Hao’s model is consistent with data from Messenger and other research on Mercury and explains Mercury’s asymmetric magnetic field in its hemispheres. He said the first important step was to “abandon assumptions” that other scientists make.
“Planets are different from one another,” said Hao, whose research is funded by a NASA fellowship. “They all have their individual character.”
Co-authors include Jonathan Aurnou, professor of planetary science and geophysics in UCLA’s Department of Earth, Planetary and Space Sciences, and Johannes Wicht, a research scientist at Germany’s Max Planck Institute for Solar System Research.
Torrance, CA, July 28, 2014 –(PR.com)– Energy Muse, a leading seller of inspirational crystal jewelry and accessories, recently announced the addition of a brand new fertility bracelet to its product lineup – the Mother Goddess bracelet. This unique handcrafted bracelet honors all mothers and mothers to be with a Mother Child pendant. Each piece is made with Rainbow Moonstone and is cleansed and activated in Energy Muse’s special healing room.
“We are especially excited to be bringing this great new Mother Goddess bracelet to our customers,” a company spokesperson for Energy Muse said. “We feel this is the perfect addition to our lineup of high quality fertility and pregnancy inspirational jewelry.”
The company promises that each healing gemstone used in every Mother Goddess fertility bracelet is aligned directly with the energy of the moon to help increase fertility. “Our Rainbow Moonstones help each of our customers by aligning hormone production and the reproductive system to ensure better chances of conception,” said a company spokesperson. “It helps women bring forth their feminine power and works to strengthen the connection they have to their inner goddess.”
The Fertility line of jewelry has been a success for Energy Muse, recently seen worn by renowned model and actress Molly Sims. With the introduction of the new Mother Goddess bracelet, the company hopes to reach even more mothers and future mothers than ever before. “These fertility bracelets are all about being mindful about motherhood,” said a rep for Energy Muse. “The journey to motherhood is a very transformative one, and these new Mother Goddess bracelets reflect that perfectly.”
About Energy Muse
Energy Muse is a conscious lifestyle brand, providing tools of empowerment, inspiration and hope for a diverse audience. Founded by Heather Askinosie and Timmi Jandro, two friends who share a love of beautiful jewelry and a belief in the healing power of crystals and a lifestyle of love and positive energy, Energy Muse has been selling beautiful, handmade jewelry pieces to celebrities and the general public for years. All Energy Muse pieces are guaranteed to bring complete satisfaction. Energy Muse has made it a top priority to educate and reconnect the world to the ancient wisdom and healing properties of crystals. Each piece of jewelry combines energy and intention which create a desired outcome. The company’s full line of inspirational jewelry includes energy jewelry and power bracelets as well as beautiful, timely pieces and wraps. Everyone can find something to love at Energy Muse.
The second quarter GDP figures for the United States are out. While the numbers are up, the long-term trend is not reassuring.
Global GDP has been growing at a declining rate since the Great Recession. While economists point to high energy costs, a decline in productivity, slower growth in the labor force, consumer and government debt, income inequality, and consumer aversion to spending, among other causes, there may be a more far-reaching trend, although still nascent, that explains some of the slowing growth of GDP. It’s called the zero marginal cost phenomenon, and it’s ushering a new economic system onto the world stage.
A Collaborative Commons is springing up alongside the conventional market and transforming the way we organize economic life – offering the possibility of dramatically increasing productivity, narrowing the income divide, democratizing the global economy and creating a more ecologically sustainable society. Although the Collaborative Commons is robust and growing faster than the traditional capitalist system, much of the economic activity does not show up in GDP figures.
What’s precipitating the great economic transformation is the unanticipated rise of the near zero marginal cost phenomenon. Private enterprises are continually seeking new technologies to increase productivity and reduce the marginal cost of producing and distributing goods and services so they can lower prices, win over consumers, and secure sufficient profit for their investors. (Marginal cost is the cost of producing additional units of a good or service, if fixed costs are not counted.) Economists never envisioned, however, a technology revolution that might unleash “extreme productivity,” bringing marginal costs to near zero, making information, energy, and many physical goods and services potentially nearly free, abundant, and no longer subject to market exchanges. That’s now beginning to happen.
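The parenthetical definition of marginal cost above can be made concrete with a minimal numerical sketch. All figures here are hypothetical illustrations (not from the article): a digital good with a large fixed production cost but an almost-zero cost per additional copy, showing how marginal cost can approach zero even while average cost stays well above it.

```python
# A minimal sketch of the near-zero-marginal-cost idea, using
# hypothetical numbers: fixed costs are high, but each extra
# digital copy costs almost nothing to produce.

def total_cost(copies, fixed_cost=100_000.0, cost_per_copy=0.001):
    """Total cost of producing `copies` units of a digital good."""
    return fixed_cost + copies * cost_per_copy

def marginal_cost(copies, **kw):
    """Cost of one additional unit; the fixed cost cancels out."""
    return total_cost(copies + 1, **kw) - total_cost(copies, **kw)

avg = total_cost(1_000_000) / 1_000_000   # average cost includes fixed cost
mc = marginal_cost(1_000_000)             # marginal cost does not

print(f"average cost per copy: ${avg:.4f}")
print(f"marginal cost:         ${mc:.4f}")
```

At a million copies the average cost per unit is still dominated by the fixed cost, while the marginal cost is a tenth of a cent — the gap the essay's "extreme productivity" argument turns on.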
The near zero marginal cost phenomenon wreaked havoc across the “information goods” industries over the past decade, as millions of consumers turned prosumers and began using the Internet to produce and share their own music via file sharing services, their own videos on YouTube, their own knowledge on Wikipedia, their own news on social media, and even their own free e-books on the World Wide Web.
Meanwhile, six million students are currently enrolled in free Massive Open Online Courses (MOOCs) that operate at near zero marginal cost and are taught by some of the most distinguished professors in the world, with many students receiving college credit. The near zero marginal cost phenomenon brought the music industry to its knees, shook the film industry, forced newspapers and magazines out of business, crippled the book publishing market, and forced universities to rethink their business model.
Economists acknowledge the disruptive impact that near zero marginal cost has had on the information goods industries but, until recently, have argued that the productivity advances made possible by the digital economy would not pass across the firewall from the virtual world to the brick-and-mortar economy of energy, and physical goods and services. That firewall has now been breached.
A powerful new technology revolution is evolving that will allow millions—and soon hundreds of millions—of prosumers to also make and share physical goods at near zero marginal cost. The Communication Internet is converging with a fledgling renewable Energy Internet and nascent automated Logistics and Transportation Internet, creating a super Internet of Things (IoT) platform for a Third Industrial Revolution that is going to fundamentally alter the global economy in the first half of the 21st century.
Billions of sensors are being attached to every device, appliance, machine, and contrivance, connecting every thing with every human being in a neural network that extends across the entire economy. Enterprises and prosumers will be able to connect to the Internet of Things, and use Big Data and analytics to develop predictive algorithms that can speed efficiency, increase productivity, reduce the use of energy and other resources, and lower the marginal cost of producing and distributing physical things to near zero on a digitalized Collaborative Commons, just as we’ve done in producing and sharing information goods on the Internet.
For example, businesses and homeowners are already producing and sharing their own solar- and wind-generated green electricity. Even before the fixed costs of installing the harvesting technologies are paid back—as little as 2 to 8 years—the marginal cost of generating the green electricity is zero. The sun and wind are free. Similarly, startup businesses and homeowners are producing and sharing 3D-printed products, often using locally available recycled materials, at near zero marginal cost.
Forty percent of the U.S. population is already actively engaged in the collaborative sharing economy. This means the race to a zero marginal cost society and the shift from exchange value in the marketplace to sharable value on the Collaborative Commons, although still embryonic, is going to increasingly shrink GDP.
As the marginal cost of producing goods and services moves toward near zero in sector after sector, profits will narrow and GDP growth will continue to slow. And, with more goods and services becoming nearly free, fewer purchases will be made in the marketplace, again reducing GDP. Think free music on Pandora, free news on the Huffington Post, free videos on YouTube, free knowledge via Google and Wikipedia, free e-books, and now free solar- and wind-generated green electricity, and nearly free products on home-based 3D printers.
Even those items still being purchased in the exchange economy are becoming fewer in number as more people redistribute previously purchased goods in the sharable economy, extending their usable lifecycle, with a concomitant loss in GDP. ThredUp, Freecycle, Sparkbox Toys, and other redistribution networks on the digitalized Collaborative Commons allow millions of people to share clothes, toys, sports equipment, and countless other items at low or near zero marginal cost.
Many consumers are also opting for access over ownership of goods, preferring to pay only for the time they use an item, which translates to less GDP. For example, several million individuals in the United States are now using car sharing services like Uber and Lyft. Each car share vehicle eliminates 15 personally owned cars, reducing GDP.
Also, consider the meteoric rise in homesharing services. Airbnb members who rent out their apartments and houses to visitors at near zero marginal cost eliminated 1 million hotel nights in New York City alone between mid-2012 and mid-2013, again cutting down GDP.
Meanwhile, as automation, robotics, and AI replace millions of employees, an increasing number of those displaced workers are sharing their talents and skills on the Collaborative Commons, using alternative social currencies as forms of payment. Since their work is not compensated in dollars, it’s not counted in the GDP.
Concurrently, as prosumers proliferate and produce and consume their own green electricity and ever-more sophisticated 3D-printed products for nearly free, it means less GDP.
The point is, while economic stagnation measured in terms of GDP may be occurring for various reasons, the headlong rush to a near zero marginal cost society could account for part of the sluggishness and will likely play an ever more impactful role in the future. What appears to conventional economists as economic stagnation may in fact be, to some small extent, a measure of the growing importance of a vibrant new economic paradigm that measures economic value in totally new ways.
Nowhere is the change in how we view the economy more apparent than in the growing global debate about how best to judge economic success. The conventional GDP metrics for measuring economic performance in the capitalist marketplace focus exclusively on itemizing the sum total of goods and services produced each year with no attempt to differentiate between negative and positive economic growth. An increase in expenditures for cleaning up toxic waste dumps, police protection and the expansion of prison facilities, military appropriations, and the like are all included in gross domestic product.
Today, the partial transformation of economic life from finance capital and the exchange of goods and services in markets to social capital and the sharing of goods and services in the Collaborative Commons is reshaping society’s thinking about how to evaluate economic performance. The European Union, the United Nations, and the Organization for Economic Co-operation and Development (OECD) have introduced new metrics for determining economic progress, emphasizing “quality of life” indicators rather than merely the quantity of economic output.
Social priorities, including educational attainment of the population, availability of health care services, infant mortality and life expectancy, the extent of environmental stewardship and sustainable development, protection of human rights, the degree of democratic participation in society, levels of volunteerism, the amount of leisure time available to the citizenry, the percentage of the population below the poverty level, and the equitable distribution of wealth are among the many new categories used by governments to evaluate the general economic welfare of society.
While producing and sharing virtual and physical goods at near zero marginal cost in the sharing economy on the Collaborative Commons vastly improves the economic quality of life of millions of people, and decreases the amount of the earth’s resources needed to sustain a healthy society, it reduces the GDP at the same time. It’s likely that the GDP metric will decline in significance as an indicator of economic performance as millions of people shift at least part of their economic activity onto the mushrooming Collaborative Commons. By midcentury, quality of life indices on the Collaborative Commons are likely to be the primary litmus test for measuring economic well-being in a zero marginal cost society.
Sometimes it’s nice to reflect nostalgically on the last couple of decades of the 20th century. You know, the era of Madonna and Duran Duran, Cheers and The X-Files, McGwire and Sosa, the Macarena, and superstring theory.
Sadly, most of those are now just memories, although I guess Madonna is still around. And actually so is superstring theory. You just don’t hear much about it these days. It’s still an active field of physics research, but the progress is technical. Nothing newsworthy.
But that could change at any time. Just last week, a new paper by Itzhak Bars suggested the possibility of a major superstring accomplishment with the potential to make strings respectable again.
Bars is a respected physicist at the University of Southern California in Los Angeles. He is known for some way-out-there ideas, like the notion that physics would be better off with two dimensions of time. Now he and USC colleague Dmitry Rychkov have proposed a connection between quantum mechanics and superstring theory. If their idea pans out, it could boost an aging theory into the list of scientific topics trending on Twitter.
News traveled more slowly in superstring theory’s early days. It started out merely as “string” theory, an attempt to explain the strong force that held atomic nuclei together. In the 1970s, though, it merged with the concept of supersymmetry. That merger spawned the notion that all of nature’s fundamental particles could be imagined as different vibration modes of one primordial type of object, a supertiny string. In this case, “string” just meant that it was one-dimensional, unlike the zero-dimensional “point” particles of standard theory.
By the mid-1980s the new superstring theory had emerged as the hottest theoretical breakthrough since quantum mechanics, mainly because it seemed to show a way that quantum mechanics itself could be merged with Einstein’s general relativity. For the first time, physicists had a unified theory that could accommodate both Einstein’s explanation for gravity and the quantum explanation for particles and other forces.
There were some snags in the strings, though. For one thing, to make the math work you needed several additional dimensions of space. Instead of a four-dimensional universe — three space, one time — you needed something like 10 or 11. But that was a mere detail.
Another early problem was that there seemed to be more than one superstring theory — different mathematical versions of the basic idea. That seemed odd. If superstring theory offered the one true final theory describing all of fundamental physics, how could there be more than one? But in 1995 Edward Witten showed that the various string theories were all just different views of a deeper theory — he called it M theory. String theory merely offered various different descriptions of the same subatomic elephant.
There was one other little hitch, too. Nobody knew how to test to see whether superstring theory (or M theory) was actually correct. Writing down a theory is one thing, figuring out whether it accurately describes nature is something else.
Please do not, however, let anybody tell you that superstrings are therefore not scientific. Sure, it’s hard to imagine ever detecting them directly with any technology that politicians would be willing to pay for. Superstrings are too small to probe with any atom smasher you could imagine building on Earth. But just as the existence of atoms could be deduced from indirect effects (Brownian motion, for instance), there are ways that superstrings could leave signs in nature that scientists could eventually decipher.
For now, though, superstring theory lacks the sort of dramatic demonstration that propels radical theories into prominence, such as Einstein’s famous precise prediction of how much starlight would be deflected when passing by the sun as measured during a solar eclipse. But perhaps some different sort of accomplishment could elevate superstring’s status. Such as one proposed in the new paper by Bars and Rychkov.
They point out that when string theory was developed, everybody assumed quantum mechanics was correct (which it is) and designed string theory to obey the quantum rules. But suppose, just for the fun of it, that you tried to build string theory without any quantum restrictions. Bars and Rychkov work out how to do that in a simplified version of string theory, specifically a version in which the strings are “open” (not closed to make a loop).
In string theory, the common interactions between fundamental particles that physicists study are described as strings joining or splitting. In analyzing the details of the splitting and joining process, Bars and Rychkov found that the basic rules of quantum mechanics naturally emerge. In other words, you don’t need to assume quantum mechanics to find string theory — it’s the physics of strings that makes the world quantum mechanical.
For decades, explaining why nature observes the mysterious rules of quantum physics has perplexed physicists everywhere. Nobody could explain why those rules worked. The connection between string physics and the quantum math may now lead the way to an answer.
“This link suggests that there is a deeper physical phenomenon, namely string interactions, underlying the usual … rules of quantum mechanics, thus providing a possible explanation for where they come from,” Bars and Rychkov write. “If string or M-theory really underlies all physics, it seems that the door has been opened to an explanation of the origins of quantum mechanics from physical processes.”
Of course, so far the analysis is just for a “toy model” of string interactions, involving just two particles. And nobody knows for sure whether M theory really does underlie all of physics. But if a theory comes along that explains why quantum mechanics is right, it’s a theory worth taking seriously.
“If this view holds up … then the concept we discussed here for string interactions being the source for quantum mechanics would boost the credibility of string theory as a fundamental theory,” Bars and Rychkov assert.
On top of all that, the string-quantum connection suggests an intriguing insight into the nature of reality. Quantum physics is notorious for implying the existence of multiple realities, as articulated in the “many worlds” interpretation of quantum mechanics. Superstring theory has also annoyed many physicists by forecasting the existence of a huge “landscape” of different vacuum states, essentially a multiverse comprising multiple universes with a wide range of physical properties (many not suitable for life, but at least one that is). If string interactions really are responsible for the rules of quantum physics, maybe there’s some connection between the multiple quantum realities and the superstring landscape. For fans of the late 20th century, it seems like an idea worth exploring.
Of all the people who are partially responsible for Friday’s much-anticipated Marvel movie Guardians of the Galaxy—studio head Kevin Feige, co-stars Chris Pratt and Zoe Saldana, Jack Kirby and Jim Starlin for pioneering Marvel’s “cosmic” stories—the most surprising one might be scientist Richard Feynman. Not that the celebrated physicist known for his work in the fields of quantum mechanics and nanotechnology contributed directly to the movie in any way (having died in 1988, that would’ve been unlikely), but without Feynman, GotG screenwriter Nicole Perlman might never have gotten involved in writing in the first place.
“Science was my gateway drug,” Perlman says, “so I tried to see if I could apply my interest in science stories to actual science—and discovered that the nitty gritty is a lot less exciting than the stories.”
Perlman grew up in Boulder, Colorado, home to a number of aerospace companies, and her father’s weekly book club was filled with … well, there’s no other way to put it: “A lot of people in that book club were rocket scientists.” And many of them were former students of Feynman from his time at Caltech. “I grew up hearing stories about him, and just being immersed in this pro-science-fiction, pro-science background,” she says.
Nicole Perlman. Ben Rasmussen/WIRED
As a teenager, Perlman idolized the physicist the way some of her peers admired Tiger Beat cover boys. “He was my childhood crush object,” she confesses, laughing. “I had printed out pictures of Feynman from the Caltech website when I was in high school. When my friends had pictures of Keanu Reeves on their wall, I had pictures of a dead physicist.”
But when she was 16, her father gave her a biography of the scientist, and everything changed. “I loved the way that he could explain these incredible mysteries about the universe; there was something about the way in which he made it seem like you could explain this to anybody in the world, you just needed the right communicator,” she said. “That was what seemed miraculous about it. These amazing, lofty ideas, weren’t walled off from not-particularly-brilliant high school students like myself. It was inspiring.”
Literally inspiring, as it turns out. In college, Perlman’s first screenplay, Challenger, was about Feynman’s time on the Rogers Commission investigating the Challenger disaster, something she describes as “a love letter” to the scientist. From there, she went on to write more screenplays based on real-life scientists and scientific exploration, including one about Neil Armstrong, which eventually brought her to the attention of Marvel Studios, where she was part of the short-lived 2009 writers program.
The program was a chance to “bridge the gap” between what she’d been doing and what she’d been wanting to do as a screenwriter, she says, adding that she “had found a little bit of resistance in Hollywood—especially as a female writer, it gets harder to do larger projects.” As part of the Marvel Writers Program, Perlman was able to choose a property to develop; the one she chose is now on track to be one of the biggest August openings in movie history.
“They gave me an option of about half a dozen different properties they thought would make a good movie,” she says. “I chose Guardians because it was space-based, and I thought I could have a lot of fun with it.”
And indeed, Guardians is one of the most fun (and funny) films to come out of Marvel Studios yet. But the difference between Guardians of the Galaxy back in 2009 and today, of course, is that only hardcore Marvel fans were familiar with the characters back then. “For two and a half years, I’d tell people I was working on a Marvel screenplay called Guardians of the Galaxy, and everybody said, ‘What’s that? I’ve never heard of it,’” Perlman remembers. “I wish I could go back to myself in 2009 and say ‘This is going to happen! People are going to know what you’re working on!’”
One drawback of choosing a relatively obscure property was the research, although Perlman remembers enjoying that aspect more than one would expect. For months she would come home with binders full of comics to read for homework. “Until I started working at Marvel, I didn’t realize just how intricate the backstories of all the characters are,” she says. “It’s this feeling of discovering new worlds you didn’t know about and getting sucked in.”
Another problem was going from the realism of her earlier projects to what she calls the “elevated” science of superhero movies. “I value real science, so I find it hard to release my stranglehold on that,” she says. “I go ‘But how would one blow up the moon?’ I want to know exactly how that would work! It was a little bit hard for me to step away from that initially.” Even today, she’s aware of the ways in which her desire for factual fidelity runs up against the demands of summer blockbuster movie-making.
Following the buzz surrounding Guardians, Perlman already has a number of projects lined up, including a YA novel adaptation for DreamWorks, movies at Fox and Disney, and a project with Cirque du Soleil. Her background in both science and “big, funny, colorful, wacky” science fiction has allowed her to escape being pigeonholed.
“I think people are always trying to find where you ‘fit in,’” Perlman says. “I’ve been allowed the opportunity to leapfrog from one genre to another because they all happen to share a lot of imagination, a lot of grounded characters, and a fantastical world.”
In the long term, her dream is “to sell an original project that is expansive, that has a huge world and big concepts, but not pre-existing material,” she says. But before she gets there, there’s something almost as exciting to tackle: getting her Feynman screenplay turned into a movie. The script just got re-optioned—“which is kind of amazing,” she says, “considering the number of times it’s been set up. It’s been this thing that keeps coming back to life, and hopefully this’ll be the time it actually gets made. We’ll see what happens.”
For the summer, Boulevard Voltaire is offering five excerpts from L’affaire Halimi : du crime crapuleux au meurtre antisémite, by Gilles Antonowicz. Click on the book’s cover to buy it.
Philippe Bilger begins his closing argument at the worst possible hour, just after lunch, in stifling heat. He argues as he always does, which is to say ferociously, but with great humanity… In his eyes, there is no need to invoke the darkest period of our history to speak of Ilan Halimi’s ordeal. He takes no clear position on whether or not the crime was antisemitic, but insists on its undeniably primary dimension: “They get bored together, they dream together. They want money. At any price. They want to become like the others, whom they hate.”
As soon as the verdict is delivered, Francis Szpiner steps into the hall where microphones and cameras are thrust forward. In his baritone voice, he decries the judges’ “particular leniency.” (…) “I invite,” he says, “the Minister of Justice, Michèle Alliot-Marie, to ask the public prosecutor’s office to appeal this decision.” He barely acknowledges that Fofana has just been sentenced to the heaviest penalty our criminal law provides. He entirely forgets the place the civil party occupies in the judicial arena. The civil party is not a party to the trial in order to demand a sentence, but to obtain compensation. The severity of the sentence is not its concern. It is not the victim who demands punishment; it is society. Justice is not what Stephen Hecquet called “vengeance in its Sunday best.” The sentences handed down by the sovereign people are no business of the victims. That is why the law does not grant the victim the right to appeal.
Szpiner also feigns ignorance of the principle of individualized sentencing. Judges are required to assess each defendant’s degree of responsibility according to their involvement in the facts and their personality. That is precisely what Bilger did in his closing argument, and what the jurors did in their verdict. What, then, did the civil party want? Life sentences for all the defendants, even though none of them, apart from Fofana, was accused of murdering Ilan Halimi, or even of being an accomplice to the murder?
Printed even before the verdict was known, leaflets signed by a surprising “Committee for a judgment equal to the murder” circulate through the courthouse. They call for a protest rally outside the Chancellery. (…) Conditioned for months, fed partial and inaccurate information, and still being fed it, public opinion is stirred. (…) Late in the morning of Monday, July 13, on the steps of the Élysée after the cabinet meeting, in the insufferable martial tone she favors, Michèle Alliot-Marie orders the public prosecutor’s office to “appeal the sentences that fell short of the advocate general’s recommendations.” No one will ever know what could lead her to believe that her opinion is worth more than that of the nine jurors and three judges who attended the hearings for eleven weeks and deliberated for three days before setting sentences equal to, or only very slightly below, the prosecution’s recommendations. One thing is certain, and Le Nouvel Observateur gets it right with its headline: “There will be a second trial. Nicolas Sarkozy has so decided. Justice has only to comply.”