Similar to batteries, millions of minor process improvements are the real story but make poor headlines compared with radical new tech.
Having said that, once/if they crack this, it opens up whole new frontiers for solar, which is already on track to utterly dominate electricity production without this factored in.
The three main new areas are low cost, low weight, and as a layer on existing cells to improve efficiency. They'll expand the frontiers of where solar makes sense in several directions.
Another interesting battery (and software dev) parallel. They're not just excited about the current record, but excited that they have a way to evaluate long term performance of new methods.
This is very similar to the work done by Jeff Dahn at Dalhousie (in partnership with Tesla). Once they had the method to measure results they could throw things at the wall and see what would still stick after a full service life without waiting 30 years.
His latest work is about how to build batteries that last 100 years: https://cleantechnica.com/2022/05/26/jeff-dahn-the-100-year-...
90% of new generation capacity added in the US annually is solar and wind. That number will only continue to climb; we're really constrained by fab throughput for PV panels and by polysilicon supply. The adoption curve goes vertical before 2030 imho, with natural gas slowly being edged out as storage and more generation roll out. Coal is dead, and nuclear would be without subsidies (which we should provide as long as the reactors can run safely until an extended EOL date).
This is exciting stuff, but I’m still confused about how we solve for base load generation with solar and wind? What do you do when an entire region is sunless and windless for a week? My impression is that you can’t simply bring enough power from neighboring regions to meet demand. So do we have really big storage systems and/or massive overprovisioning? And if so, how does solar and wind compete with those costs factored in?
In the long run we can look forward to more exotic technologies like grid scale battery stations and small nuclear reactors. But in the short run the objective should be to use renewables when possible and fossil fuel only when necessary.
Power companies don't need to tear down oil and gas generators right away, they just need to scale back their use as they rely more on renewables. You mention two mitigations, expanding the geographic region and overprovisioning. Both of these raise the cost of transition, but not prohibitively.
> What do you do when an entire region is sunless and windless for a week
To me that sounds like a highly theoretical problem. I'm not a meteorologist or climate scientist, clearly, but I think if you have week-long thick cloud cover, chances are it comes with solid winds, no?
I’m not sure. Last winter Chicago went something like 40 days without sun IIRC, and I assume the same is true for much of the surrounding area. It doesn’t seem particularly unlikely that you could have many consecutive windless days as well, especially as climate change brings us these “once in a millennium” events every few years.
Moreover, as we transition homes from gas heating to heat pumps, demand on the grid will increase, and so will the risks in case of failure (people freezing). And even without cloud cover, winters are snowy, the days are short, and the light is indirect.
You just described why placing solar panels in Chicago is a bad idea.
Real world grid scale solar farms are placed in sunny areas that don't get regular cloud cover or even snow like that. Washington State and Chicago are terrible locations for solar, but moving electricity is surprisingly cheap. https://blog.solarenergymaps.com/2014/05/potential-solar-ene...
That said, cloudy days reduce but don't eliminate output, so as excess capacity is added the minimum output keeps increasing. A hypothetical green hydro/wind/solar grid would have significant excess solar capacity the same way the current grid adds redundant conventional powerplants.
Your link doesn’t support your claim that moving energy is surprisingly cheap. I’m of the impression that this isn’t the case (we can’t easily build transmission lines that can carry the necessary amount of power from the southwest to other parts of the country).
I didn't add a link on power transmission. Actually building stuff in the US has issues that have little to do with cost.
Individual UHVDC links are in the multiple GW range. Exact numbers depend on a host of factors but something like 1c/kWh per 1,000 miles for long range is a reasonable ballpark. (Upfront costs in the billions.) Though it’s much higher for underwater links, etc.
A link sending power 24/7/365 at maximum capacity is significantly cheaper per kWh; conversely, geographic barriers can quickly increase prices. Also, those costs aren't constant with distance: the transmission lines themselves cost far less than the equipment at either end.
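To make the utilization point concrete, here's a back-of-envelope levelized-cost sketch. All the numbers (a $3B, 2 GW, 1,000-mile link, 40-year life, 1%/yr fixed O&M) are illustrative assumptions, not real project data, and losses and financing are ignored to keep it simple:

```python
# Back-of-envelope levelized cost of moving power over a long HVDC link.
# Every number here is an illustrative assumption, not real project data.

def transmission_cost_per_kwh(capex_usd, capacity_gw, capacity_factor,
                              lifetime_years, fixed_om_fraction=0.01):
    """Levelized $/kWh for one link, ignoring losses and financing."""
    hours = lifetime_years * 8760
    kwh_delivered = capacity_gw * 1e6 * capacity_factor * hours  # GW -> kW
    total_cost = capex_usd * (1 + fixed_om_fraction * lifetime_years)
    return total_cost / kwh_delivered

# Hypothetical 2 GW, 1,000-mile link costing $3B upfront, 40-year life.
full = transmission_cost_per_kwh(3e9, 2.0, 0.95, 40)  # near-constant use
half = transmission_cost_per_kwh(3e9, 2.0, 0.45, 40)  # solar-only duty cycle
print(f"~{full * 100:.2f} c/kWh at 95% utilization")
print(f"~{half * 100:.2f} c/kWh at 45% utilization")
```

Because the capex is fixed, halving utilization roughly doubles the per-kWh cost, which is why a link running flat out is so much cheaper than one carrying only daytime solar.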
Solar makes sense in Chicago, and even in places in Alaska that genuinely experience weeks of darkness. An interesting current research area is how much vertical bifacial modules take advantage of reflections from snow on the ground, while shedding snowfall better and flattening the supply curve.
A grid entirely based on solar in those regions is less optimal, but no one is proposing such a thing.
There is a big difference between producing enough power on average and having power when you want it. Going 100% off grid is possible in Chicago but I doubt it's cost effective. Staying grid connected is offloading all the difficult parts to your electric company.
Yeah, that’s their job, the hard part. You don’t get a free return on equity. Orchestrate power across a distributed system of load, generators, and storage or let someone competent do it and earn that margin.
> “The new batteries will be spread across major centres in regional Queensland near communities that have significant rooftop solar generation because we know that’s where they will have the greatest overall benefit.
> “The network of the future will not only need to move electricity from where it is generated to where it’s going to be used, but also to when it’s consumed.”
And sure, utilities can buy power from consumers at a lower cost than they charge for power, but that only works until batteries decline to a cost point where consumers simply replace their utilities with batteries. Is that today? Not yet! But with the amount of battery manufacturing capacity spooling up for EVs and utility scale storage, that day will arrive.
Short answer is no, slightly longer answer is that even if you have "solid" winds they're often not enough to make up for the loss of solar. Additionally, cloudy and low-wind periods are common enough that without grid-stabilization tech we'd expect regular blackouts through many countries throughout the world.
All that said, I'm quite hopeful for grid stabilization tech coming online in the next decade.
Source: I've spent the last few years working on renewables R&D
Sure, so in my mind there are two kinds of grid stabilization, I'll call them short term and long term.
Roughly speaking, short term stabilization helps deal with fluctuations in demand over the course of a day. The Hornsdale Power Reserve[0] is a good example, it's basically a big bank of Lithium Ion batteries. This allows us to handle, say, everyone in Australia turning on their AC when they get home without needing blackouts to reduce load. At night, excess power generation can then refill the batteries.
Unfortunately this is not a workable solution if total demand per day exceeds total power production per day for more extended periods (think weeks). This is precisely the problem that can occur with solar or wind. Lithium-ion batteries are not suitable for storing large amounts of charge over longer periods. We would instead like a battery that can affordably store power from windier and sunnier periods for later use.
I'm currently excited by the approach taken by Form Energy[1] for what is called a Rust Battery. If they get it working this essentially allows us to trap and release energy through an extremely cheap and scalable chemical process. During a sunny summer you could potentially store enough excess power to get you through a very cloudy winter.
"Average temperature" doesn't mean much. Deterioration is concentrated in the periods of peak temperature, non-linearly. It makes it hard to express practical longevity with averages. You might be able to say something like, "it should last X years if it stays below T2, and spends no more than Y hours between T1 and T2." Then, actual lifetime at T2 provides an upper bound on that.
They don't say in the article, but perovskites are amenable to stacking with different wavelength capture characteristics, so can be built to capture above 40% of incident solar energy. That, and being radically cheaper to produce than Si cells, means there is great interest in this result.
Deterioration tends to be roughly linear on an Arrhenius scale, i.e., when the log of the rate is plotted against inverse temperature. This is because the log of the deterioration rate is roughly linear in the activation energy of defects.
What that means in practice is that hardly anything happens below a certain temperature, but it goes all to hell in a flash, above that. The "certain temperature" for any particular sample depends on what you consider "a flash".
You would be pretty peeved if your solar panel went all to hell in a year, so you calibrate accordingly.
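The accelerated-aging math behind that calibration is a one-liner. Here's a sketch of the standard Arrhenius acceleration factor; the 0.7 eV activation energy is an assumed round number, since real perovskite degradation modes each have their own:

```python
import math

# Arrhenius acceleration factor between a test temperature and a field
# temperature, assuming a single thermally activated degradation process.
# The activation energy used below is a hypothetical value.

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_test_c, t_field_c):
    """How much faster degradation runs at t_test_c than at t_field_c."""
    t_test = t_test_c + 273.15
    t_field = t_field_c + 273.15
    return math.exp(ea_ev / K_B * (1 / t_field - 1 / t_test))

# e.g. testing at 100 C vs a 35 C field average, assumed Ea = 0.7 eV:
af = acceleration_factor(0.7, 100.0, 35.0)
print(f"1 year of testing at 100 C ~ {af:.0f} years at 35 C")
```

The exponential means small changes in the assumed activation energy or in the field temperature swing the factor enormously, which is exactly why "average temperature" alone says so little about lifetime.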
I guess these could be particularly well suited to colder / less sunny climates. Which could be a very good thing, given that in sunny climates existing technologies are already pretty much good enough.
Wikipedia has a neat graph comparing the efficiency of different types of solar cells including perovskite, which shows how fast it's increasing compared to silicon/multi-junction.
Not really. Materials and manufacturing costs for perovskite probably already beat silicon if anyone scaled up. It's efficiency and durability we're waiting on.
If you price out a large solar farm, you may be surprised that the modules are probably only a third of the total cost; once you account for land, cabling, inverters, grid connection, and installation, it's not really surprising. Given these costs, there's an obvious incentive to pay a small premium for higher-efficiency modules (saving on installation, mounting, and land use) and a disincentive to buy cheap but not-yet-high-efficiency perovskites.
This is why companies like OxfordPV are targeting tandem perovskite-silicon cells with efficiencies at 30%. Even if this costs 1.5 times as much as a 20% efficient silicon cell, it'd still bring down the price of your solar farm.
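The arithmetic behind that claim is easy to sketch. In this toy model the module prices and the area-scaled balance-of-system cost are invented round numbers (chosen so modules land near a third of the silicon farm's total, as above); some real costs like inverters and grid connection scale with power rather than area, which this ignores:

```python
# Why a pricier, higher-efficiency module can still lower total farm cost:
# area-dependent costs (land, racking, labor, cabling) shrink as efficiency
# rises. All figures are illustrative assumptions.

def farm_cost_per_watt(module_cost_per_w, efficiency, area_cost_per_m2=100.0):
    """$/W for modules plus area-scaled balance-of-system, assuming
    ~1000 W/m^2 insolation (so a module yields efficiency * 1000 W/m^2)."""
    watts_per_m2 = efficiency * 1000.0
    return module_cost_per_w + area_cost_per_m2 / watts_per_m2

silicon = farm_cost_per_watt(0.25, 0.20)   # 20% cell
tandem = farm_cost_per_watt(0.375, 0.30)   # 30% tandem at 1.5x module price
print(f"silicon: ${silicon:.2f}/W, tandem: ${tandem:.2f}/W")
```

With these assumptions the 30% tandem comes out cheaper per watt despite the 1.5x module premium, because the area-scaled costs drop by a third.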
> …researchers returned to their labs to tend to their experiments in carefully coordinated shifts, Zhao noticed something odd in the data. One set of the devices still seemed to be operating near its peak efficiency.
The best monocrystalline Si cell manufacturers now offer warranties of something like 83-84% of original STC rated output after 25 years; I don't think this tech is anywhere near that yet in terms of degradation over time.
“The results showed a device that would perform above 80 percent of its peak efficiency under continuous illumination for at least five years at an average temperature of 95 degrees Fahrenheit. Using standard conversion metrics, Loo said that’s the lab equivalent of 30 years of outdoor operation in an area like Princeton, NJ.”
Constant temperature testing? Good luck.
Where I live, the difference in temperature between night and day can reach 20 C.
A lot of things perform well at constant temperature.
35 C outdoors doesn't sound like much. Illuminated areas can easily run 10-15 C above the temperature in the shade.
Edit: The article goes into more detail; they did fast-age it at 100 C, so it should be durable. That said, I'm still doubtful it would perform as well in the real world, especially since global warming will raise air temperatures. Did the 30-year models account for a rise in temperature?
Although I can't find the link to an article I read about the early development of PV cells using a purple glass filter, I did manage to dig out a research paper that covers more or less the same ground regarding colored filters.
Seems like a great advance, but a lot of questions remain. Like, how efficient is it versus silicon panels at the start?
If it makes manufacturing cheaper, that's a win, and the see-through panel potential is neat.
We have had solar for a number of years now. With the arrival of an electric car, our seemingly decent-sized array isn't enough. I'm not sure the current market is going well; our panel manufacturer (LG) stopped making them.
Could you elaborate? I didn't understand what this even means. Did someone named "Theodora D." donate a lot of money in 1978, and now the chairs are named after them?