One of my little hobby horses as an engineer is the optimism of builders. You see, most builders build something and then leave site with it looking fine, so they never see the long-term problems that emerge from certain poor practices. Another way of expressing this is that builders typically only build a house once. When they are engaged to repair prior bad construction, the problems are usually obvious, but the cause less so. Well, engineers are not exempt from this problem. We are better trained to examine the causes of problems and better able to make predictions about the long-term behaviour of various construction techniques. Even so, we do not usually revisit our work any more than a builder does.
Seismic design represents an area where engineers have often had to make design decisions based more on theory than on practical experience, because earthquakes are comparatively rare. When we design systems to resist gravity loads, they are tested immediately because gravity is always turned on. Consequently these systems are subject to immediate feedback – if a floor feels bouncy during construction, you can spot that quickly and fix it. Seismic systems have historically been devised and then left dormant and ignored until tested by an earthquake.
That situation began to change in 2006 when the Building Act was changed to create the term “Potentially Earthquake Prone”, meaning that the building could suffer unacceptable damage in a prescribed earthquake. The Act required that buildings have a certain minimum level of compliance with modern standards. This began a process where old buildings were reexamined with fresh eyes, and that process was hugely accelerated by the Canterbury Earthquakes. Wellington City Council deserves serious kudos for proactively engaging with the intent of the legislation 6 years ago and instituting a very effective regime of identifying problem buildings.
Seismic strengthening didn’t begin in 2006 – it had been going on for a long while. It’s not common knowledge, but Wellington Town Hall was originally built with a large clocktower, which was demolished in the 1920s or 1930s (I forget exactly when) because of the perceived risk it posed in an earthquake. However, prior to 2006, it was not exactly clear what the standard for strengthening should be. Different engineers adopted different standards based largely on the financial pressures faced by the client. The most common strengthening levels were 100% of the 1965 code and 2/3 of the 1992 code.
This is patently silly. If you wanted to buy a car in 2006 you wouldn’t have accepted one built to 1965 standards, but that is essentially what we were accepting for the strength of buildings. It was allowed because of the concept of “practicability”: that it is impractical to strengthen some buildings beyond a certain point, and it is impractical to demolish and replace all non-compliant buildings en masse. There is an argument to be made that the Canterbury Earthquakes have done just that – forced the elimination from the market of buildings that aren’t adequate – and we can see the problems that has created in terms of shortages of certain kinds of buildings.
Another conceptual shift which occurred was the change from designing a building to be sufficiently strong to designing it to have sufficient resilience. I have posted in the past about the concept of “capacity design”, so even many of the buildings which “failed” in Canterbury actually performed as intended, though sometimes by incredibly narrow margins. You can have buildings that are nominally “Potentially Earthquake Prone”, but which actually pose a minimal threat to anyone. Timber houses are often in this category, where they can be unrepairable after an earthquake, but did not “fail” in a way that threatened the inhabitants’ lives.
Each time the standard iterates, it makes seismic design more onerous – there is a straightforward, but non-linear, relationship between the demand on buildings and time. Full compliance with the 1992 standard is only something like 75% compliance with the 2005 standard. We remain somewhat comfortable with this as long as the principles of capacity design have been followed, because capacity design reduces the importance of the earthquake’s magnitude.
Simultaneous with increased loads are better and better models of the structures, which reveal problems with old designs. One very clear example of the problem is covered by the recent DBH advisory on stairs. This advisory is a double body-blow to old stair egress design, because it points out the inadequacies of detailing, while increasing the demands on those details. Again we must give credit to the WCC for acting swiftly and sensibly and sending all possibly-affected building owners a notice of this advisory and requiring that they check the compliance of their stairs.
Consequently, it is always severely nerve-wracking to look back at your old work. I reviewed one strengthening job only about 10 years old where a building was strengthened to the accepted standard of the time, but whose strength against modern standards is firmly back to “Potentially Earthquake Prone”.
Trying to look forward into the future is even more nerve-wracking in some ways. To some extent, the market is helping engineers by placing higher value on earthquake performance, but there is still significant push-back from people unwilling or unable to pay for the work that we consider necessary. We are still up against that limit of “practicability”, and so, despite everything, we are still involved in projects where we are strengthening a building from 25%NBS to 34%NBS, just enough to get it off the register of “Potentially Earthquake Prone” buildings.
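To make the %NBS figures concrete: %NBS is simply a building’s assessed capacity expressed as a percentage of what the current New Building Standard demands. The numbers below are hypothetical and only illustrate the arithmetic, not any real assessment:

```python
def percent_nbs(assessed_capacity: float, current_code_demand: float) -> float:
    """Capacity as a percentage of current-code demand (illustrative only)."""
    return 100.0 * assessed_capacity / current_code_demand

# Hypothetical building: demand normalised to 1000 units of strength.
before = percent_nbs(250.0, 1000.0)  # 25%NBS: "Potentially Earthquake Prone" (<34%)
after = percent_nbs(340.0, 1000.0)   # 34%NBS: just clears the threshold

print(before, after)  # 25.0 34.0
```

Note how small the margin is: moving from 25%NBS to 34%NBS clears the legal threshold while still leaving the building at roughly a third of what a new design would require.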
Even those jobs where we aim for full compliance are under suspicion. It’s a virtual certainty that when we come back in another 10 years, we will once again be downgrading expected performance and increasing expected loads. This means that the long-term economic value of some of this strengthening work is questionable. Strengthening to 34%NBS is possibly essentially a placebo.
It is even worse than that, because on the horizon, intruding into the cutting-edge design offices, is a completely different approach to the seismic response of buildings. At the moment, we look at ground accelerations and work through the fundamental equation: Force = Mass × Acceleration. We guess at ground accelerations and consequently derive a force. Forces we understand. The new design approach is based around displacements, and forces only enter the picture at the periphery, such as ensuring that the building is adequately fastened to the ground so it doesn’t slide away or topple over.
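The force-based arithmetic really is that simple at its core. Here is a toy sketch with made-up numbers (the mass and the ground acceleration are hypothetical, and real codes layer many factors on top of this), just to show how an assumed acceleration becomes a design force:

```python
# Toy force-based calculation: Force = Mass * Acceleration.
# All values are hypothetical; real seismic codes apply many further factors.
G = 9.81                 # gravitational acceleration, m/s^2

mass_kg = 500_000        # assumed seismic mass of the building (500 tonnes)
ground_accel_g = 0.4     # assumed design ground acceleration, as a fraction of g

base_shear_newtons = mass_kg * ground_accel_g * G
print(f"Design base shear ≈ {base_shear_newtons / 1000:.0f} kN")  # ≈ 1962 kN
```

The displacement-based alternative the post describes starts instead from how far the building is allowed to move, which is why forces only appear at the periphery.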
Preliminary comparisons of cutting-edge displacement-based design seem to generally result in “more economical” designs. Member sizes tend to be smaller. The worry is two-fold. First, that these displacement-based designs could turn out to perform worse than we think, because they are largely untested by real-world earthquakes. There is a chance that, like other cutting-edge methods of analysis, there are problems with the approach, and so the current crop of buildings might experience real problems. The second worry is that when we reevaluate existing or old buildings with the new approach, irregularities in the building system may be revealed, rendering those buildings worse off than we think in significant ways. It’s not quite a Catch-22, but it is certainly an interesting area.
For me the question ultimately devolves down not to which approach is taken for your number-crunching, or even what the values of those numbers are, so much as whether the basic concept for what you’re doing makes sense. For example, are all the elements in your building roughly the same strength and stiffness? Do they have large displacement capabilities? Is the strength of the building distributed throughout the building? Without using any numbers at all, you can often look at a building and realize that it’s going to perform well or perform badly, and so I don’t tend to lose sleep over the buildings with low nominal compliance but a basically-good system. I worry about the buildings where someone got fancy and pulled a bunch of numbers together to prove it was all okay.
In some ways, I feel that’s what we need to go back to: having a sensible basic system to start with. Therefore, while the quantitative goal-posts seem to be perpetually receding into the distance, qualitatively we are probably not much worse off than we used to be.