I used to be a structural engineer, and while things may have changed in the 8 years since I left, I doubt it.
The problem with structural engineering is incentives. It is one of the reasons that I left. Most structural engineering companies are filled with conservative, boring engineers that prefer to look up pre-designed segments and don't make full use of the steel design handbook or building codes.
For example, in Canada, if you have a non-load-bearing brick outer face (most brick buildings in Canada), you're allowed to reduce the wind load by 10%. I was the only person I knew who knew this, because I actually read the steel, concrete, and wood design handbooks front to back while making notes. Furthermore, almost nobody has read the building code "just because"; they might hop to a section here or there when they need it, but they're generally not going to just sit down and read the thing.
So when I designed buildings I was able to take advantage of a lot more things than most people. This led to my buildings being cheaper / easier to build, which of course led to our engineering fees looking like a larger portion of the job.
The problem with reinforced concrete is the same. Engineers have no financial incentive to make alterations to their designs to make the buildings last longer. It is almost trivial to make sure steel won't rust (or to double or triple a building's life), but it makes construction costs go up 0.01% and engineering fees go up 0.1%, so nobody does it. Regulators are to blame too. There are amazing concretes (Ultra High Performance Concretes) we should be using in our buildings that don't even need steel because they are so ductile and strong (200 MPa for the one I was familiar with, Ductal by Lafarge), but it's impossible to use them in construction in Canada because the code is so rigid.
> Most structural engineering companies are filled with conservative, boring engineers that prefer to look up pre-designed segments and don't make full use of the steel design handbook or building codes.
I dare say the same is true of software engineering. I, nominally a backend engineer, know (and apply) more about HTTP than most front-end devs and architects I've met, simply because I sat down one day and read the HTTP spec. (It's not a difficult read!)
A few years back I was building a web server from scratch for my own quirky needs (using, of all things, C and Scheme). It required understanding the HTTP protocol, and I agree the RFCs are not all that hard to read; it's worth learning the details in order to apply them.
However, what I eventually found out was that the HTTP "rules" were not faithfully followed by many implementations. For example, extra care taken to make sure HTTP headers were correctly parsed just caused headaches: headers received from many origins were "malformed" despite the specs saying what a header "MUST" contain or which characters are not allowed.
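The strict-vs-lenient tension above can be sketched in a few lines of Python (a toy illustration, not the original C/Scheme server; the function names are mine, and the "token" pattern follows the RFC 7230 field-name grammar):

```python
import re

# RFC 7230: a field-name is a "token" -- no spaces, restricted character set.
TOKEN = re.compile(r"^[!#$%&'*+.^_`|~0-9A-Za-z-]+$")

def parse_header_strict(line: str):
    """Reject anything the spec says MUST NOT appear in a header line."""
    name, sep, value = line.partition(":")
    if not sep or not TOKEN.match(name):
        raise ValueError(f"malformed header line: {line!r}")
    return name.lower(), value.strip()

def parse_header_lenient(line: str):
    """What real servers end up doing: split on the first colon and
    tolerate stray whitespace or odd characters in the name."""
    name, sep, value = line.partition(":")
    if not sep:
        return None  # no colon at all; skip the line
    return name.strip().lower(), value.strip()
```

A line like `"X Broken : yes"` (space before the colon) raises in the strict parser but parses fine in the lenient one, which is exactly the kind of input that forces implementations toward "loose" compliance.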
I know servers are supposed to be "tolerant" of non-compliant clients (and vice-versa), and realistically there's little choice but to go along with "loose" compliance. I've often wondered to what extent that reality contributes to less than optimum security that's so often been an issue.
How many of us actually read the documentation and the source for the systems we use? All the options and flags for jq, wget, socat, ssh, rsync, etc. I am trying to spend about 5 minutes a day just reading man pages, esp about things I THINK I know but actually don't.
In my personal experience, the best engineers I've encountered (and learned from) have understood every system, subsystem, and interaction, all the way down to the most fundamental foundational level. And this understanding allows them to make the best decisions (because they're equipped with the best information). This knowledge doesn't come by sitting down and studying how CPU architecture works when you're building a web application. But it does come from diving as deep as is required for any given task. Say you're dealing with a web app performance bug: you crack open the Chrome source code, trace it down to something compute-intensive, learn whatever C++ code is involved, understand how it utilizes the CPU, and learn about the specific architecture you're using that exhibits the problem. You've ultimately obtained significant depth and breadth of information, and at the end of it, you know and understand exactly why your web app performs the way it does, how to work around it in your app, how to fix it in Chrome (or why you shouldn't), and how the CPU architecture affects the Chrome source code. Now you can apply Chrome, CPU architecture, and C++ to anything that is built upon any one of them (independently or otherwise). That's not to say you know everything about each of them, but you've learned things that will help you in the future in some cases.
The most important skill here is being able to diagnose a problem and fearlessly, relentlessly employ the engineering discipline of solving whatever problem/task is at hand, not because of observed symptoms ("hey, I turned that knob and everything was OK! I don't know why, but I can close this JIRA ticket and move on with my life. I'm a 10X engineer!") but because you understand precisely what's happening. I made the mistake of spending the first decade of my software engineering career learning from trial/error and observation, and while those skills are useful in some cases, the best engineers are extremely disciplined about understanding the full depth of a problem before writing a line of code.
In a nutshell, I guess what I'm advocating for is: do not blindly study man pages. The reason is that without a practical application for the knowledge, it seeps out of your brain and you forget it quickly. The exception (case in point, GP's example) is when what you're studying does have a practical application or is relevant to what you spend your time doing. This has always been my problem with academic curricula (sure, some people can learn well this way, and there's definitely a minimum foundation that must simply be committed to memory). Even in basic subjects like maths, the work is rote, and we maybe get a passing grade, but often without the understanding (or the depth of understanding) that is really the most important aspect of learning the subject matter.
I have optimized websites before based on a basic understanding of how CPUs work. For example, it's much faster to do all 200 checks on an object and then load the next object than to do one check at a time across every object, thereby reloading each object 200 times. This ended up being a 30x or so speed-up and seemed like magic to half the room.
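The shape of that optimization can be sketched like this (a toy model: the object layout, the check function, and the counts are all made up, and a counter stands in for the real cost of reloading an object, whether that's a cache miss, a deserialization, or a fetch):

```python
NUM_OBJECTS = 1000
NUM_CHECKS = 200
loads = 0

def load_object(i):
    """Stand-in for whatever makes touching an object expensive."""
    global loads
    loads += 1
    return {"id": i, "value": i * 7}

def run_check(obj, c):
    return (obj["value"] + c) % 2 == 0

# Slow shape: one check at a time, sweeping all objects per check,
# so every object gets reloaded once per check.
loads = 0
for c in range(NUM_CHECKS):
    for i in range(NUM_OBJECTS):
        run_check(load_object(i), c)
per_check_loads = loads

# Fast shape: load each object once while it's "hot" and run
# all 200 checks on it before moving on.
loads = 0
for i in range(NUM_OBJECTS):
    obj = load_object(i)
    for c in range(NUM_CHECKS):
        run_check(obj, c)
per_object_loads = loads

print(per_check_loads, per_object_loads)  # 200000 1000
```

Same work, same results, but one loop order touches each object 200 times and the other touches it once; with real memory hierarchies that difference is where speed-ups of the magnitude described come from.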
It's not about knowing the minute details so much as understanding what's going on well enough to model it in your head.
PS: This assumes you are operating on lots of data; a small-scale test can go the other way (and, in my case, did).
Good thinking, but beware of what a CPU "is". I've just come back from the intel.com boards and... holy jesus, the amount of detail that even memory-locality-level thinking ignores. To leverage a processor you need to understand OS cache conventions, the interaction with the L1 and L2 caches, and how those caches are wired to the actual core. Otherwise you're already losing 30% of the raw bandwidth.
I left with a strong laziness view on optimization. Profile based on what the business needs and ignore everything else or you will never escape the rabbit hole.
Most people wouldn't know an algorithm or the concepts of algorithmic complexity if they bit them on the ass. Even devs. You don't need a PhD in computer science; just read some stuff and think a bit.
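A small concrete instance of the kind of thing meant here (sizes are arbitrary): membership tests are O(n) on a Python list but amortized O(1) on a set, so a one-line change in data structure turns a quadratic loop into a linear one.

```python
HAYSTACK_SIZE = 10_000
needles = list(range(0, HAYSTACK_SIZE, 7))   # 1429 values to look up

haystack_list = list(range(HAYSTACK_SIZE))
haystack_set = set(haystack_list)            # one O(n) conversion up front

# O(len(needles) * len(haystack)): each `in` scans the list from the front.
hits_slow = sum(1 for n in needles if n in haystack_list)

# O(len(needles)): each `in` is a hash lookup.
hits_fast = sum(1 for n in needles if n in haystack_set)

print(hits_slow, hits_fast)  # 1429 1429
```

Both loops compute the same answer; only the cost differs, and at realistic data sizes that difference is the whole ballgame.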
I completely and totally agree with you. I would only add that the best engineers also understand when to take a complex set of interactions and create a black box abstraction from them. They also understand when the abstractions are likely to leak and what the consequences are.
I am not putting that much weight into it; 5 minutes a day is not a lot to get familiar with the capabilities of the tools. A couple weeks ago, I had no idea that `jq` had a compiler in it. Many of us, myself included, use our tools in very shallow ways.
I think everyone has time for it, but it requires nerves of steel. You are thinking "I could just fix this the easy way", you are feeling social pressure to quickly get to the next thing. It's easy to decide "I can't take the time to really figure this out."
But if you can ignore the pressure and stick to your guns, you end up saving time in the long run, sometimes making orders of magnitude more work possible. Most managers should appreciate that.
But it's difficult to have the nerve to do it, and it can be difficult to explain in the short term. Like most opportunities there's a cost to pay up front.
Would that we as a profession developed an encyclopedia of ways of pushing back on "Is it done yet? How much longer?" completion pressure. There's certainly a profusion of lore about PFYs and lusers, why not structural business frustrations?
Well...yeah...that's kind of the whole basis of 'requirements elicitation': to understand what your client is trying to accomplish on such a level that they don't need to give you a list of tasks, you create the tasks that will accomplish what they need the system to do.
depends on what you consider your job to be, right?
for all I know, historically there must have been a lot of masons asking the same question when stacking bricks: "people have time to do that while building a wall? to carefully put mortar between the bricks??"
but nobody remembers those masons because all their walls have fallen apart by now.
("... wait seriously, even the inner walls?? but the boss never checks those anyway")
_This_ is the right path. Dig as deep as you need. Don't be afraid to get your hands dirty. So many people just randomly fear the 'magic' of the lower levels.
Do you have any tips for remembering the minute details in the manuals? Do you make flashcards, or do you re-read them repeatedly spaced out over time?
I find the volume of it overwhelming, but I think I have a practice now that works well for me. Say I want to do something in vim, but it feels clunky. Part of me says "there may be a better way to do this", and I go looking for a way. I usually limit such a search to ten minutes or so. I'll stretch that if I'm getting closer.
It's not a hard science, but I think the two important elements are 1. Being willing to deep dive and 2. Monitoring how much time I spend to allow for reasonable stops. I come back to unsolved issues when they come up repeatedly. That tells me those are more important.
My personal process has a lot of parallels with "lazy" or "short-circuit" evaluation and "greedy" algorithms.
First, the fact that certain information is out there is a lot easier to remember than the actual details of that information. Bits like "zsh has this crazy advanced globbing syntax that obsoletes many uses of `find`" or "ssh can do a proxy/tunneling thing and remote-desktop things with the right options, also it sometimes doesn't need to create a login session and sometimes it does" or "ffmpeg has these crazy complex video filters that allow you to do real cool tricks (therefore maybe the same for audio filters, though I haven't actually read about that yet)".
Some of this is man pages, some of this is blog posts or stackoverflow answers. I keep my bookmarks well-organized using tags (in Firefox, Chrome doesn't seem to have tagged bookmarks for some reason, last time I checked). Whenever I find something that seems it may be useful some day, I bookmark it, tag it properly and sometimes I add a few keywords to title that I am likely to search for when I need the info.
Then, given the knowledge that some information is out there, I allow myself to look it up whenever.
I've never been very good at rote memorization, at least not doing it on purpose. I often lack the motivation to muster up the will and focus required. So I don't force myself, but somehow still remember stuff anyway.
There are so many tiny things in such a wide field of interests, I don't even really want to memorize it all :) So I cut it down to knowing of the existence of information (and sometimes, classes of information).
Then maybe some day I'm working with some particular features of ssh or git, and I notice myself looking up the same commands or switches a few times over again. So apparently I'm not memorizing these. Then, I make a note. That's not a very organized system, it can be a post-it, a markdown/textfile, an alias, a shellscript, a code comment, a github gist. I used to try and keep one textfile with "useful commands and switches and tricks and and and", but I found myself never looking at it, so I stopped doing that. Instead I try to put the note somewhere I'm likely to come across when I need it in context.
The way Sublime Text just remembers the content of new untitled files and then lets you organize these groups of files into projects, quick-switching between projects using ctrl-alt-P, is just perfect (or shall I say, "sublime"?). It allows a random short note to evolve organically from temporary scratch to a more permanent reference note.
I also download some reference manuals, so I have access offline, which is often significantly faster to quickly open, check and close. For instance there's a link to the GLSL 4 spec in my start menu, which instantly opens in katarakt by just pressing "alt-F1, down, right, enter" -- a leftover from a project where I was reading that thing all the time. After a while I added a shorter webpage-converted-to-markdown reference to the sublime project file, and now I use it less.
I guess the shorter summary is: Yes I do have tips, but they are what work for myself, but the more generally applicable advice is: yes there are tips and there are tricks and they are whatever works by any means necessary, but most importantly: yes, there are tips and tricks, and some of them will work for you too! :)
RTFM is a weird boundary. I've 'wasted' so many hours dabbling in tutorials made by other people instead of diving into the real information: specs, source. It's a mental click; maybe it seems overwhelming, maybe it seems too broad, and we're too impatient to read a chapter to get an answer. After a while 1) you get more patient, 2) you know other sources won't help... and all of a sudden specs look like fun reads.
ps: I was just on www.ecmascript.org/es4/spec/, historical artefact but full of surprises.
I'd say it's the exact same problem. The structural engineers are looking at these pre-designed components as black boxes, not bothering to understand why they were designed the way they were, what they do internally. A huge portion of software engineers sees the components they work with (such as HTTP) as black boxes, too. This means that when the engineer is considering its use, they cannot effectively consider how it will hold up in the particular situation they are dealing with.
You need to know at least nominally how the sub-components interact, or you can't predict how something will perform when you use it. Even the strongest abstractions leak a little bit.
I'd just like to mention one counter-argument, which is that the rigidity of the code in some cases helps protect against developers using cheaper materials, or new materials that seem better on paper but may not be better in reality.
An example I've heard of, but am having trouble finding the exact name of: in condo buildings here in Canada they started using a new piping material for the water delivery inside the units. The problem was, while I think in theory the material was better, it has the property that when it fails (due to either a manufacturing or installation defect), it fails catastrophically. So instead of just a small pinhole leak, the piping splits when compromised, and you have many thousands of dollars in damages across multiple units. Buildings with this material can no longer be insured in Canada.
So I don't know that I would really trust giving developers a wide latitude in selecting materials, even if they sound great on paper.
I don't know what the solution is, because I agree we need to be more flexible, and have a way to introduce new and better technologies, but we also have to be diligent, in ensuring that the new technologies and processes work the way we expect them to, and have the desired effects.
In Canada we use what is called limit states design, and it already handles sudden failure vs gradual failure. Essentially, if you want to use a member or a material that fails suddenly, you must design it to fail one tenth as often. In practice engineers go even more overboard, because nobody wants blood on their hands and gradual failure greatly decreases the odds of that happening.
In terms of construction materials in condos failing (glass and piping) I actually put the blame for those two failures on the individual testing companies. Engineers in Canada don't understand statistics properly because most aren't taught materials testing properly in university. But we don't live in a utopia either. It is expected that things will go wrong sometimes and that we'll have to update our building codes to address those shortcomings. My issue with Canada is that we don't have a (or at least I'm not aware of there being a) structured materials testing code.
Sure, we'll regulate materials and connections for our main structural elements, but there will always be that last mile where someone wants to do something weird (like building part of a building under a railway, or authorizing someone to roll back reinforcement bar they over-bent), and you're pretty much out on your own once you come to those scenarios. It's doubly unfortunate because most of the engineers who just look things up in tables are basically helpless, since they don't remember or use the basics day-to-day. So they end up being extremely conservative, unnecessarily wasting money and material for all of us.
I don't practice in Canada, but my understanding was that the CSA is an analogue to the ASTM codes here in America (and internationally apparently, if you believe their name change). ASTM codes are very thorough when it comes to materials testing. I believe Europe has a similar standard.
I don't understand how your third paragraph's thrust follows from your second paragraph - what does materials testing have to do with site specific (railway) or field changes (bent rebar)?
How long did you practice in Canada? Your viewpoint of engineers meshes well with the opinion that I've heard from a lot of junior level engineers who are just making the adjustment to a mid-level position but are still interacting with the lower level staff who are, as you say, typically helpless. They are supposed to be - they are still learning.
Only about 3 years before I got fed up and left. I'll fully admit I didn't get that involved with materials testing; and perhaps the firm I was with was substandard in this regard; but I don't think so. When I looked into the falling glass in Toronto I learned that they only tested a very small number of fasteners. I don't recall the number, but it was trivial statistics to prove that for the number of glass panes going up in Toronto they didn't have a large enough "n".
The two examples I gave were ones that I dealt with personally. I was extremely dismayed at the rigour the firm I was at used. To test the bent rebar I think we used a sample size of 6 and then tested to failure. For the railway example they just used the weight of the train. Then when I reviewed the designs and brought up that the train could apply the brakes and thereby increase the downward force, they just multiplied everything by 2.
In my experience the low-level staff were useless, with a couple of people who knew what they were doing. The medium-level staff had two groups: the people who still knew advanced math and the people who got good at AutoCAD. And the senior people, while good at sales or general guidance, had basically forgotten all but the most basic structural engineering principles. I've literally had to explain crushing vs bending moment to a 20-year structural engineer. I've (accidentally) designed a structure that had already been designed by a senior person who forgot to put it in the tracking system. I used one sixth the steel and mine could handle more load.
I will grant you, however, that I may have just been at a substandard firm. We had some large projects, but we weren't designing new skyscrapers or mega-structures.
For someone who is self-admittedly not knowledgeable about the testing requirements, you seem very certain of your conclusions about this Toronto falling glass problem. Testing of components is usually by the manufacturer and it is their responsibility to provide a product that meets the requirements of the design. This is not a problem from the design side and is very difficult to prevent without the engineer being onerous with his requirements to a point that no engineer is really willing to go to.
I'm not familiar with the specifics of your rebar example so I can't comment. Your example with the train makes no sense - the design loading for railway is codified in the design manual (AREMA in the USA) and includes dynamic forces. Braking forces are applied longitudinally to the track so unless you were in a curve there is no downward force. I find it hard to believe that your boss agreed with a fictitious force and then just multiplied everything by 2 to get around it.
Your opinion on your colleagues is concerning to me and is probably more indicative of your lack of experience than the other staff's incompetence. Your experience reads like someone suffering from 200th hour syndrome, I wouldn't be surprised that if you stuck with it another 3 years you would have realized your initial impressions were way off base. At worst, it sounds like you may have been working at a firm that did commodity work and didn't attract top tier talent. If you are as good as you seem to think you are then you should have jumped ship when you got "fed up".
I don't intend for this post to sound dismissive but it will probably come off that way.
As an aside, knowledge of advanced math is not necessary for structural engineering in my opinion, nor is it common.
"Braking forces are applied longitudinally to the track so unless you were in a curve there is no downward force."
Why is that? Is it because the cars behind are pulling on it and keeping the usual forward weight transfer from happening?
Think of a motorcycle doing a "stoppie", i.e. the rear wheel is in the air under braking and all the weight is on the front wheel.
This is hard to describe without being pedantic and without being able to draw but I will attempt.
A motorcycle performing a stoppie experiences rotation because the inertial force couples with the braking force to create a moment about the front axle of the bike (this isn't technically correct language but you get the gist). While this idea holds true for the train, we have to take into account the differences in mass and contact between the two systems. Train cars typically ride on more than 2 axles and this provides stability from front or rear tipping. Train cars are also typically very heavy meaning that the braking force is not sufficient to 1) move the center of forces of the system ahead of the front axles and 2) tip the car. Increases in load because of this are, therefore, not sufficient to double the load on the front axle as you would see with a stoppie.
In general I agree with the idea that is put forth; however, it is important to note that what we are discussing is the BRAKING force. The inertial forces that result in differential axle loads are not a braking force (certainly, they are a result of braking in this case, but these forces also exist when a train begins pulling). These loads are DYNAMIC loads and are already considered in another part of the analysis. Dynamic loads also include consideration for bumps, etc. Because of this, the code is explicit that braking forces are applied only longitudinally to the track so that the forces are not counted twice.
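The tipping argument can be put in rough numbers (all figures below are illustrative assumptions, not design values): a decelerating vehicle begins to rotate about its front axle roughly when a*h exceeds g*b, where h is the height of the centre of gravity and b is its horizontal distance to the front axle.

```python
g = 9.81  # m/s^2

def tipping_margin(decel, h, b):
    """Ratio of restoring to overturning moment about the front axle.
    > 1 means the vehicle stays planted; < 1 means it tips forward."""
    return (g * b) / (decel * h)

# Motorcycle under hard braking: high CG, short distance to the front
# axle, and braking at ~1 g -- right on the verge of a stoppie.
print(tipping_margin(decel=1.0 * g, h=0.7, b=0.7))   # 1.0

# Freight car: CG is low relative to the distance between trucks, and
# emergency braking only manages ~0.1 g -- nowhere near tipping, and
# the load shift onto the leading axles is correspondingly small.
print(tipping_margin(decel=0.1 * g, h=2.0, b=6.0))   # 30.0
```

The two orders of magnitude of margin are why the motorcycle analogy breaks down for trains: the geometry and the achievable deceleration are both working against any significant vertical load transfer.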
Thanks for responding, and I'm very happy that there are people like you out there; but trust me when I say that despite my poor recollection from the time I practiced, my fundamental point is not wrong. Most engineers I've worked with in Canada are not to be trusted with advanced design. If you disagree I'd like to really talk to you about it because I felt like I was surrounded by people that had no idea what was going on and I would really like to be proven otherwise.
I'm not competent to explain why Tacoma Narrows failed, but it obviously wasn't up to design load. The existence of large sheets of glass falling off buildings and endangering people from multiple different installations strongly indicates that someone botched something.
The 200th hour syndrome refers to when a pilot has reached their 200th hour in the air and their confidence in their abilities is enough that they begin to get careless:
"Enough experience to be confident, enough to screw up real good." is how a nice TV show put it.
You're right, one reason engineers stay conservative is if the new ideas have problems that show up later, the engineer gets the blame. Hence he sticks with "nobody ever got fired for buying IBM".
For example, when my house was built I wanted something better than fiberglass insulation. After research, I settled on icynene foam. The contractor refused to use it, because he'd be to blame if it went wrong, and it would be an enormously expensive retrofit. He finally agreed after I formally agreed to accept the risk and not blame him.
15 years in, and the icynene has been great. No troubles at all.
I don't really understand why you left. I would have loved to employ someone like you back when I needed a structural engineer. You had an opportunity to make a lot of money by shining head and shoulders above your competition.
The only problem I see is a marketing issue - being able to get the message out to your customers of the advantages of going with your firm.
The City of Toronto blacklisted us because our engineering fees were over 25% of the construction cost. If you're still in engineering and want to meet send me an email. I think there is loads of opportunity to make building buildings more efficient and I would love to combine my undergrad with computer science.
"...which of course lead to our engineering fees looking like a larger portion of the job."
So there you go. Mismeasurement is 99% of all business problems. I've been laid off before because I didn't generate the (apparently) required level of software defects and missed a gate or two by a day or two doing it. I was actually told I didn't look busy enough.
If most are unaware of it, is it then good practice (aka insurance) for you to cite the relevant codes in your plans, in order to avoid trouble by inspectors due to the differences, or is it safe to expect an inspector to know the codes A-Z?
Inspectors do not typically check plans and calculations for adherence to the code, they check whether the work matches the plans. Checks for whether the plans match the code are generally done by the reviewing agency, to varying degrees of effort. For example, an agency will check for adherence to their design manuals (which are the governing code for their work): Caltrans against their manuals, railroads against AREMA, building departments against the building code, etc.
And yes, it is good practice to cite the code as appropriate but it isn't necessary - any questions by the agency will be sent as comments and approval will not be granted until they are satisfied.
You've identified a fantastic application of AI, when the technology gets there. A machine can read all the codes with better retention and would have the patience to make these optimizations. A human in the loop could offload some of the judgement-intensive aspects.
Why don't you have that AI read all manuals not just construction ones. Also all legal texts, all medical texts and all computer texts. It could optimize everything!
> The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
You joke, but that is a great idea. IBM's Watson was helpful to doctors by being able to reference huge amounts of medical literature. Basically a domain-specific natural language search engine. I've read about similar software for lawyers searching for relevant laws and cases.
The AI doesn't need to be so good it can replace the doctor/engineer/lawyer. It just needs to be a helpful tool for finding relevant documents. NLP and QA is just starting to get good and available to the public, so I think this will be a big thing soon.
That's one problem with government regulations. Even when they're actually well designed and impartial, they don't get updated when technology changes.
After a few decades, the once great regulations are often way out of date.
Well, yeah, but they have to exist because of a failure in the free market. Without the regulations the free market doesn't have a good way to prevent cheap, unsafe construction, especially when the problems aren't evident for a couple decades.
There might be alternatives. For example, requiring large and excessive liability insurance. The insurance companies then have a huge incentive to make sure buildings are safe and last long term. But the laws about what building materials you can use aren't set in stone, and different companies can have different rules.
There are multiple problems. The biggest is that corrections kick in only after a failure.
Suppose an oil tank farm is incorrectly built, a tank breaks, and the oil gets into the nearby river, killing fish and preventing it from being used as drinking water for 100 miles downstream for the next week.
Who gets to sue if the insurance company decides to not pay? How much does each lawsuit cost just in lawyer time? How long does it take to get through the legal system?
If the insurance company doesn't have the funds, what happens? This can easily happen because the company has every incentive to find the least expensive insurance company, or the company might be overstretched, like Lloyd's in the 1980s and 1990s with the asbestos, pollution and health hazard suits.
Or it can be more perverse, like an insurance company set up as a front, working with unsafe companies, and where a significant problem or payout simply triggers bankruptcy, and no remuneration.
If different companies have different rules, how easy will it be to switch insurance companies? Because that sounds like a great way to get lock-in. Can I bring my own building inspector in or do I need to depend on the insurance company inspector? Will the codes be public information?
The insurance companies themselves would have to be regulated. This moves regulation up one meta level, and makes it a simple matter of verifying they have the assets necessary to pay out. As opposed to having the regulators figure out what building materials are good, and every other detail about the construction industry.
Insurance companies are usually themselves insured. If they go bankrupt, e.g. a natural disaster or something unexpected happens, another insurance company has to cover it. This is possible because they insure multiple industries and geographic regions, so can take a few hits.
The details will need to be worked out, but I don't see why it would be anywhere near as complicated as working out the details of building codes. Regulations are complicated, we already accept that. This is just a way to significantly simplify it.
Figuring out whether the insurance companies have the assets necessary to pay out is never a simple matter. They don't have the assets to pay out if every policy is fully claimed at once, almost by definition, which means that it's a complicated matter of assessing how many policies are likely to be claimed at once, which requires figuring out details like whether the construction industry is making widespread use of a potentially dangerous material. Insuring the insurers can only help so much; Lloyd's was the insurers' insurer of last resort, and as dalke points out, asbestos claims wreaked havoc on them.
Or when Hurricane Andrew hit Florida in '92. 11 insurance companies went bankrupt after 600,000 insurance claims were filed. Almost 1 million people lost insurance coverage.
One of the results was a stricter and statewide building code instead of piecemeal codes. Successive hurricanes helped give evidence for the usefulness of the new code.
Another was a state trust fund to ensure sufficient insurance capacity.
You could have both regulation paths: Either you build according to code, or you build whatever you want so long as it has plenty of liability insurance.
As a side note, I really wish more of these sorts of alternate regulations existed. They would be very useful fallbacks for when regulations become outdated or overly restrictive. Another concrete example of this is automobiles. Right now in the US, you can't bring a consumer car to market without extensive crash testing from the NHTSA and fuel economy tests from the EPA. These high fixed costs eliminate enthusiast and niche manufacturers. If the law said, "Any model that doesn't pass these tests incurs a $10k (or 25% or whatever is onerous enough) tax on each vehicle.", it would allow for new manufacturers to enter the market with far less capital.
Actually, a new model is emerging in construction where the contractor is responsible for maintenance for a couple of decades. Sounds like a win/win: it's guaranteed income for the contractor unless they screw something up in the construction phase. Someone can probably remind me what this system is actually called.
Does the International Building Code that many jurisdictions in the US use help here at all? It's revised every 3 years. But I could also see it being overly conservative and holding back local innovation.
As I understand the parent it's not that the regulations are outdated but rather engineering firms are unwilling to take advantage of all that the regulations allow.
I'm not in construction and not an engineer, but when I was watching Mike Holmes he said that "code" (the building code) really means the minimum building code. I had never really thought of it that way before, and it's a good point: why strive for the minimum?
You can build better; there's no reason you can't (cost, obviously), but most people just aim to barely pass the minimum building code.
That's a bit misleading. Yes, new construction must at least meet the building code. But it's not like it's been designed to be just enough. There's a heavy safety factor in the codes. In some cases they work to reduce the code because it's too much.
"This lead to my buildings being cheaper / easier to build, which of course lead to our engineering fees looking like a larger portion of the job."
I propose you start a consulting gig, or maybe your own firm. It can't be that there aren't people who want their buildings to be cheaper for the same quality.
For whatever reason, people in general prefer to pay for things, not knowledge. It's much easier to ask for $10K more of materials than it is for $8K more of consulting, even if the latter is actually 20% cheaper. Human nature.