Having an API is all well and good, but if there’s no way for third-party apps to actually authenticate and use the API, it’s not very useful.
Background
While the framework for the REST API was merged into WordPress Core with the 4.4 release, the only means of using endpoints that require authentication is what is known as ‘cookie authentication’ — that is, piggybacking off of the authentication cookies (plus a nonce) that WordPress Core sets in the browser when you log in to your WordPress site traditionally. Unfortunately, that leaves the REST API little more useful than the legacy `admin-ajax.php` file.
Fortunately, there are several authentication methods being worked on at the moment, in plugin form, for consideration for merging into Core.
I’m heading up one of them, called Application Passwords. In short, it lets a user generate as many single-application passwords as desired — one for each phone, desktop app, or other integration — and later revoke any application password at will without affecting other applications or the user’s normal authentication. The passwords are then passed with each request as Basic Authentication, encoded in the header of each request, as per RFC 2617.
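As a sketch of what that looks like on the wire (the username, password, and site URL here are all made up for illustration — this is just how Basic Authentication works, not anything specific to the plugin’s internals):

```javascript
// Hypothetical example: authenticating a REST API request with an
// application password via HTTP Basic Authentication (RFC 2617),
// run under Node.js. The credentials below are placeholders.
const user = 'george';
const appPassword = 'abcd efgh ijkl mnop';

// Basic auth is simply "user:password", base64-encoded, sent in the
// Authorization header of each request.
const token = Buffer.from( `${ user }:${ appPassword }` ).toString( 'base64' );
const headers = { Authorization: `Basic ${ token }` };

// Pass `headers` to your HTTP client of choice when requesting, e.g.,
// https://example.com/wp-json/wp/v2/posts
console.log( headers.Authorization );
```

Decoding the header on the server side recovers the exact `user:password` pair, which is why this scheme should only ever travel over HTTPS — or, as with Application Passwords, carry a revocable secret rather than the account password.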
The other plugin is OAuth 1.0a authentication (spec). Most OAuth usage across the internet is actually OAuth 2.0 — however, OAuth 2.0 requires HTTPS on the server side. Ordinarily for most hosted services, this is not a problem. However, for a distributed platform like WordPress, this is untenable due to the majority of sites not supporting HTTPS. So an older, more complex specification — designed to not require HTTPS — had to be used.
For the record, I’m fully expecting to see an OAuth 2.0 plugin built in the near future for use on sites that have an SSL certificate and support HTTPS. However, that may not be very useful for app developers who want a ‘build once, run everywhere’ authentication method that will always be available.
Now, how much access these credentials should grant is a very interesting question, and it leads to many more — such as: if an Application Password shouldn’t be usable to create or delete other Application Passwords, should it be allowed to do other user-administrative tasks (provided the relevant user has those permissions)? After all, if we prevent it from making a new Application Password, but it can just go and change the user’s actual password or email address, that’s a rather silly restriction.
So, there are several possibilities.
First, you can just say “Any way into your account gives full access to everything your account can do. Be careful what applications and websites you give access to.” — the most basic, relatively easy-to-understand approach. Honestly, this is my preference.
Second, when creating a new Application Password or connecting a new client via OAuth, you could do something like … selecting what ‘role’ you’d like to give that connection. For example, if your normal user account is an Administrator, but you’re connecting an app that’s intended just for writing blog posts, you may want to downscale your role for that authentication key to an Author or perhaps an Editor. An advantage to this is that it would be more cross-API — that is, it would work just as well with the legacy XML-RPC API as with the new REST API.
This ‘role restriction’ method may be somewhat fragile, as it would need to filter `current_user_can` and `user_can` — but only when checking the current user’s ID. That could goof up cron tasks or other incidental things that happen to run on the same request as the REST API request.
Third, we could do something REST API-specific — either whitelist or blacklist REST API endpoints based on the authentication key. So, when either creating a new Application Password or authorizing a new OAuth connection, you would set rules as to what endpoints it can be used to hit. Perhaps you’d want to allow all `wp/v2`-namespaced endpoints, but no endpoints added by plugins under custom namespaces. Or you may want to only allow access to the core `posts` and `taxonomies` endpoints. Or even something like allowing access to anything but the `plugins`, `themes`, `users`, or `options` endpoints.
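To make that idea concrete, here’s a minimal sketch of per-key endpoint whitelisting. The function name and rule format are entirely hypothetical — nothing like this exists in the API today — but the matching logic is the whole trick:

```javascript
// Hypothetical sketch: each authentication key carries a list of allowed
// route prefixes, and any request outside those prefixes is refused.
function isRouteAllowed( route, allowedPrefixes ) {
	return allowedPrefixes.some( ( prefix ) => route.startsWith( prefix ) );
}

// A key restricted to the core posts and taxonomies endpoints:
const allowed = [ '/wp/v2/posts', '/wp/v2/taxonomies' ];

console.log( isRouteAllowed( '/wp/v2/posts/42', allowed ) );      // true
console.log( isRouteAllowed( '/myplugin/v1/widgets', allowed ) ); // false
```

A blacklist would be the same check inverted, and method-level rules would add the HTTP verb as a second dimension to each rule.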
The downside of this is that it won’t work with the legacy XML-RPC API, and the user interface for it would likely be far more confusing for users. It could also get problematic, as permissions may vary between who can read the `options` endpoints and who can write to them — or the like. So it may complicate things further, perhaps allowing GET requests but not POST, PUT, or DELETE requests to certain endpoints.
Your Thoughts?
In the end, I’m not sure what the best path forward is. Maybe I’ve missed something. But I am confident that we need to start paying more attention to authentication and permissions for the forthcoming REST API. If you have any thoughts or remarks, please leave them in the comments.
Sidenote: I wrote this article two months back, at WordCamp US, and am only now getting around to posting it. Sorry for the delay.
A few months ago, the culmination of nearly two years of internal work was open-sourced to the world: a new WordPress admin interface built by Automattic on top of Javascript technologies like Node.js and React, codenamed Calypso.
A lot of folks got excited and a few got scared. Some folks got a bit confused. But I haven’t heard many folks drawing the parallel and writing the explanation that I’ve cobbled together, so here’s my take on it:
For a while now, there’s been a variety of WordPress apps — iOS, Android, and some defunct ones for Blackberry, Windows Phone, WebOS, and the like. They all interact with WordPress via the existing XML-RPC API. They’re not really extensible for plugins — if someone installs “The Events Calendar,” events don’t start showing up in the app as they would in your traditionally PHP-generated WordPress Admin UI.
Calypso currently mirrors those mobile apps more than anything else, just shifted to a wholly different space. Instead of using the legacy XML-RPC API, which is meant primarily as a way to publish content and has a host of issues (sending user passwords in plaintext with every request, for one), it uses a REST API with far better authentication. Instead of running on mobile devices, it runs either in your web browser at WordPress.com or encapsulated in a Desktop app. But at its core, it’s a self-contained administrative application for WordPress sites.
I feel that the biggest win in releasing the Calypso interface, though, is that it can remedy a situation that’s festered for some time now. The current WordPress mobile apps are maintained by a crew of mobile developers that work for Automattic.
Development happens in the open, and community contributions are welcomed, but they are comparatively few and far between — largely due to the fact that the majority of folks who are really passionate about WordPress are primarily familiar with the languages that WordPress is written in — PHP, CSS, Javascript, MySQL, etc. Very few have much interest in leaping into App development. Likewise, most mobile developers have their own set of problems that they are passionate about solving, and volunteering their free time to build and maintain an administrative app for WordPress would ordinarily be low on their list of priorities.
So, it falls to someone to find and hire mobile developers to maintain the assorted WordPress mobile apps.
With the release of Calypso, though, there is a bit of a paradigm shift. Calypso, being written in Javascript, is already in the skill set of many of the folks who are already passionate about building the core software, as well as those that leverage WordPress to build sites.
This, I think, will mean a significant renaissance of community interest in API-driven apps for maintaining a WordPress site. I also predict that we’ll see a lot of folks passionate about WordPress forking Calypso and tweaking it to make customized apps and distributions for specific clients and use cases — which will only expand further once REST API endpoints ship in WordPress Core, and Calypso migrates to use those, instead of the WordPress.com REST API.
I know my password. I still have my cell phone number that is set as a recovery number.
So how could I be locked out?
Several months back, I changed my cell carrier to Project Fi — the Google amalgamation of T-Mobile, Sprint, and WiFi networks to provide better coverage at lower cost. I’ve been thrilled with the service (like, seriously ecstatic), but there are some odd issues that have cropped up.
For one, it seems that text messages from SMS ‘short codes’ don’t go through. This has been a known issue with Project Fi for some time now. I first found out about it when trying to set up Google Wallet with USAA — which runs through an automated SMS short code system. Many automated texts do work without issue — Dropbox, for one.
So I just spent a full hour (I clocked it at the end) on the phone with a very empathetic and understanding customer service rep from Apple. Unfortunately:
I don’t have any iOS or OSX devices currently logged into my iCloud account (I had just voluntarily switched my primary computer to Windows for work)
I can’t receive text messages from their automated system at the phone number I have on file (despite the fact that I called them from that number, and they can call me back at that same number)
I don’t have my recovery key (it was about three years back when I first turned on two-factor authentication, and I have absolutely no idea where I would have stored it)
So there is absolutely nothing that they can do for me, it seems.
I mean, I understand this to a point. The rep I had on the phone was very apologetic, but the system that they built just doesn’t account for the fact that perhaps sometimes phone numbers lose the ability to receive text messages.
They knew I was who I said I was — I was calling from my number on file, I had all my credit card information, I could authenticate the first step of logging in. They could even call me at the phone number they have on file. But because they couldn’t text me, they couldn’t — not wouldn’t, but actually couldn’t — help me.
But this isn’t meant to be a sob story, or a tirade against Apple. Okay, maybe a little bit of a gripe, but I’d like it to be more a focus on the importance of considering edge cases in development.
The customer support reps are incredibly constrained. They knew without a doubt that I could receive calls at the phone number in question, but they weren’t empowered to do anything about it. They escalated the issue, and it seems no one was able to do anything, apart from offering their condolences that I won’t be able to log into my account.
If there is one take-away from this, I suppose it’s to enable your customer support reps to actually do their jobs. I know Apple has gotten burned in the past by hackers gaming the system, but the lesson there is the importance of being judicious when dealing with requests, not barring the doors against all of them.
—
As a semi-related note, I’m heading up the group working on bringing Two-Factor support to WordPress core. Two-Factor is something that I believe in deeply, I just also believe in the importance of carefully building out the systems that serve as back ends to such methods of authentication.
Faro shuffling is a technique where two packets of cards are pressed together so that they stitch into one another, one card to one. It takes a goodly amount of practice to actually pull it off reliably, but if you can, it’s a tremendously fun skill to have.
There are two primary kinds of faros performed on a 52-card deck: the ‘In’ faro and the ‘Out’ faro. An ‘Out’ faro is one where the top and bottom cards of the deck are preserved on the outside of the deck; an ‘In’ faro is one where they are moved to the interior of the deck.
If you can perform perfect cuts and faro shuffles, it will take 26 ‘In’ shuffles to completely reverse a deck of 52 cards, and another 26 to return it to its original order. By performing ‘Out’ shuffles, however, the full deck returns to its original sequence in only eight shuffles.
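Those cycle lengths are easy to check with a quick simulation — here’s a Javascript sketch, where the ‘deck’ is just the numbers 0 through 51:

```javascript
// Simulating perfect faro shuffles to verify the cycle lengths above.

// 'Out' faro: the deck is cut exactly in half and interleaved so the
// top card stays on top.
function outShuffle( deck ) {
	const half = deck.length / 2;
	const result = [];
	for ( let i = 0; i < half; i++ ) {
		result.push( deck[ i ], deck[ half + i ] );
	}
	return result;
}

// 'In' faro: same cut, but the top card moves to second position.
function inShuffle( deck ) {
	const half = deck.length / 2;
	const result = [];
	for ( let i = 0; i < half; i++ ) {
		result.push( deck[ half + i ], deck[ i ] );
	}
	return result;
}

const original = Array.from( { length: 52 }, ( _, i ) => i );

let deck = original;
for ( let i = 0; i < 8; i++ ) deck = outShuffle( deck );
console.log( deck.join() === original.join() ); // true: 8 out-shuffles restore the deck

deck = original;
for ( let i = 0; i < 26; i++ ) deck = inShuffle( deck );
console.log( deck.join() === [ ...original ].reverse().join() ); // true: 26 in-shuffles reverse it
```

The underlying math: an out-shuffle sends the card at position p to position 2p mod 51, and 2⁸ ≡ 1 (mod 51), hence the eight-shuffle cycle.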
I’m working on mastering this, because it’s dang fun and appeals to my brain. And, as a predictable way of reordering decks, it can be rolled into some fun illusions.
For reference, standard new deck order is A♠️-K♠️,A♦️-K♦️,K♣️-A♣️,K♥️-A♥️.
While performing eight perfect cuts and ‘Out’ faros, if you start with standard new deck order, these are the eight cut cards:
I’m currently experimenting with possibilities for a combo Halloween costume that I could wear with my daughter this year. I’ve always wanted to be able to add a flair of the dramatic to costumes, and smoke is one of the best ways to do it. Especially when it’s just a touch here or there.
I want it to be portable and affordable. Both of these are kinda requirements, honestly, for a once-per-year Halloween costume.
In doing some research online, I saw an offhand remark from someone about e-cigarettes, vaporizers, whatever you like to call them, and the more I thought about it, the cleverer it seemed. The recent pivot in the nicotine industry has driven down the cost of e-cigarettes to the point where I could buy a “V2EX Automatic EX Starter Kit for E-Liquid” for about $12 at my local gas station.
Keep in mind that this is just the e-cigarette, not the ‘e-liquid’, the nicotine-laden stuff that makes it go. By my understanding, that’s the far pricier bit.
So, e-cigarette (rechargeable miniature smoke machine) in hand, I’d need several more things: the fuel that makes it go (as I have no desire for nicotine or flavoring, I decided to forego the ‘e-liquid’), some sort of pump to operate the ‘draw’ that activates the e-cigarette, and some way of getting the smoke from the e-cigarette to where I want it.
The primary ingredient in ‘e-liquid’ is a fun little compound called Propylene Glycol, and indeed you can buy it without the nicotine or flavorings much cheaper — if you have a Compounding Pharmacy anywhere near you, they normally sell it for about $10/pint. A pint is far more than you would conceivably need for a little smoke machine, but the point is that it’s cheap. It’s also available on Amazon Prime. You don’t need to mix it with anything; you can just pour it directly into the refill area of the e-cigarette. Granted, you may want to get an eyedropper or a syringe with a blunt needle to do it with, so you don’t make a mess.
Now, we need a delivery method.
I had initially been envisioning some sort of dinky little one-way plastic air pump with some hose on it that I could hide either under an armpit or behind a pushable button somewhere on the costume, but while trawling Amazon, I found a great option — a 6′ tube with a siphon pump. Going by its reviews, it’s made just as cheaply as the price indicates, but for our purposes — a one-night costume — the price ($7) is right, with free shipping on Prime.
It is missing one-way valves, and I’ve got a set of those coming — again, Amazon Prime — but I don’t have them in hand quite yet.
In all, it’s come out to just about $30 ($35 with the one-way valves), and it feels totally worth it to add an incredible effect to a costume.
So, all things considered, I’m expecting to have a pretty fun instant smoke addition to a Halloween costume this year. And with the leftover propylene glycol? Maybe I’ll just practice making smoke rings. 🙂
One friendly warning, though — you do not want polyethylene glycol. That’s a laxative. 💩
Last weekend I attended EdgeConf, a conference populated by many of the leading lights in the web industry. It featured panel talks and breakout sessions with a focus on technologies that are just now starting to emerge in browsers, so there was a lot of lively discussion around Service Worker, Web Components, Shadow DOM, Web Manifests, and more.
EdgeConf’s hundred-odd attendees were truly the heavy hitters of the web community. The average Twitter follower count in any given room was probably in the thousands, and all the major browser vendors were represented: Google, Mozilla, Microsoft, Opera. So we had lots of fun peppering them with questions about when they might release such-and-such API.
There was one company not in attendance, though, and they served as the proverbial elephant in the room that no one wanted to discuss. I heard them referred to cagily as “a company in California”…
A friend shared a trailer of the upcoming movie Suffragette on Facebook yesterday. Apart from the fact that the movie itself looks amazing, I was struck by the stunningly beautiful rendition of Landslide overlaid on the second half.
I was immediately taken aback. It sounded very reminiscent of Imogen Heap, but then again, not. I left comments, sent tweets, and finally — while I slept last night — got an answer.
I’d never heard of Sherwell before, but I’ll be looking her stuff up later today. For the curious, here’s the straight version of her Landslide cover off her Soundcloud page:
— george stephanis toots on mastodon (@daljo628) May 11, 2015
After I published this, someone from cPanel reached out to have a more in-depth conversation than is really possible in a medium that caps you at 140 characters.
In the interest of transparency and context — as well as showing cPanel’s efforts thus far in working to fix things — here’s the conversation that transpired on that ticket, #6489755 in their internal ticketing system. Any modifications on my part are purely for formatting, as well as omitting the names of customer support folks.
cPanel:
Are you using the cPAddons tool within the cPanel interface to install & manage WordPress? If so, then yes, we disable the auto-update functionality within the application so the updates can be managed from the cPanel interface itself. The way our cPAddons tool tracks software is not compatible with the way WordPress updates, hence why we disable the auto-updates so we can track it through cPAddons.
If you’re not using the cPAddons tool to install/manage WordPress and have concerns of us modifying the core of the application, please let me know.
Regards,
—
*********** ********
Technical Support Manager
cPanel Inc.
Me:
I’m a Core Contributor to WordPress, not a cPanel User. I was speaking up on Twitter because I learned through some Forum threads that y’all were doing some very problematic things — which I’m hoping to address here.
Just to make sure we’re talking about the same thing, the three changes that I’m aware of are specifically these:
return true; // Force this functionality to disabled because it is incompatible with cPAddons.
(Please note that all my code references to WordPress core are aimed at the latest revision of the `master` branch on GitHub)
It looks like when you’re hacking core, you’re turning off not merely Automatic Updates (as you suggested prior), but all WordPress Updates as a whole. This is a Very Bad Thing. If you were merely disabling Automatic Updates, but still leaving the user with the ability to use WordPress’s very well established upgrade system, that would be something else entirely — and in fact is documented extensively here: https://make.wordpress.org/core/2013/10/25/the-definitive-guide-to-disabling-auto-updates-in-wordpress-3-7/
— and can be done by adding a single line to your generated wp-config.php file when installing WordPress:
define( 'WP_AUTO_UPDATE_CORE', false ); # Disables all automatic core updates
Why do you feel the need to fully disable all updates from within WordPress and force users to use either cPanel or FTP exclusively to upgrade WordPress? Why can’t they work in conjunction with one another?
Clearly, users have been dismayed and shocked when their installs haven’t been notified of security point releases that are available as y’all have killed the `get_core_updates()` function. Many don’t even realize they may need to go into cPanel to upgrade their WordPress install, and so their installation is left at an outdated, insecure version that is incredibly vulnerable to exploit.
cPanel:
Thanks for the followup. The WordPress management through cPAddons is quite old, and very well may have been in place prior to having define( 'WP_AUTO_UPDATE_CORE', false ); within the WordPress application. I’m uncertain of that as I do not know when WP introduced that function but from Googling it the oldest result I can find is from 2012.
That said, cPanel does in fact do a few things with cPAddons in regards to customers who have out dated versions:
Whenever WP releases a maintenance build that addresses security concerns, we react very quickly to get our software updated to be available to customers.
By default, we define that software managed/installed through cPAddons is automatically updated when a new update is available.
Based on the above information, if the server administrator leaves the defaults enabled, once WP introduced a maintenance releases that corrects security concerns and we’ve tested and updated our source for it, customers will receive the release automatically.
If the server administrator decides to disable automatic software updates, the end user and systems administrator will still receive notifications that their installation is out of date accompanied with steps on how to update their application.
With that, I can definitely appreciate the concern for making it as easy and automated as possible for users to get updates for their WordPress, there’s definitely more to the situation that solely disabling WordPress’ automated updates in the core.
I’ve submitted a case (#188545) with our developers to have the logic for disabling updates changed from it’s current behavior, to using define( 'WP_AUTO_UPDATE_CORE', false );.
If you have any other questions, feedback, or concerns, please don’t hesitate to let me know.
Me:
By default, we define that software managed/installed through cPAddons is automatically updated when a new update is available.
Clearly, that’s not the case in practice. As the user who discovered this remarked:
In my audit, it appears that — of 37 total WP-based websites on our server — we have 14 that have not updated to the latest version of WordPress. Of those 14, the oldest version is 3.9 (which 3 of 14 are running), and the newest is 4.1 (which 4 of the 14 are running).
If cPanel wants to manually update sites to current releases, I’m fully, 100% in favor of that. It’s a solid step to a safer, more reliable web.
My issue is that y’all are preventing users from updating themselves via the existing WordPress infrastructure. There’s really no reason to block an existing, stable upgrade system. If you just need some way for cPanel to be notified of core upgrades, it’s relatively trivial to set up a perhaps 20-line function that will notify cPanel when an upgrade happens — and that will even account for users manually updating their WordPress installation via FTP, which it seems your current approach would break on.
Here’s a proof of concept, originally embedded as a gist.
Also — I’m not a lawyer, but distributing a forked version of WordPress and still calling it WordPress (rather than cPanelPress or something) may be a trademark violation. Forking is 100% fine under the GPL for copyright on the code, but may be problematic from a trademark perspective — as what you’re distributing isn’t actually WordPress, but rather a hacked up version. If that makes sense?
cPanel:
Thank you for contacting us regarding people’s experience using WordPress as distributed via our Site Software feature. This feature is a method of installing and managing various third party applications. Applications installed via Site Software are intended to be managed entirely within Site Software, thus in-application updaters are disabled. Allowing the in-application update to proceed will cause a conflict between the updated application and the Site Software, which can easily result in confusion. From this perspective what we are doing is no different from Debian and other Linux distros that distribute applications with in-application updaters.
We generally release the latest version of WordPress within 1 to 5 days of the latest WordPress update. At minimum server administrators are informed each night of all Site Software applications that need updated. It is up to user’s to configure their notifications within cPanel to receive such updates.
Within the Site Software user interface, users are able to upgrade all applications that are out of date. In the admin interface, a server admin can choose to upgrade all Site Software applications on the entire server.
Based upon what the Drumology2001 user reported on the forum it appears something is amiss on that server. We’d love to examine that server to determine why WordPress updates were not available to the user. Based upon the fuzzy dates used on the forum, and compared with our internal records, the 4.1.1 update was available to the Site Software system prior to the initial post. We’ll reach out to him to determine whether there is anything we can do there.
One of our concerns is that in-application updaters are incompatible with the Site Software distribution method. There are various things that could happen due to updating a Site Software managed application outside of Site Software. At minimum it means that from that point onward the server admin, and potentially the user, will be informed of a software update that is no longer needed. At worst someone will force an update that results in corrupting the installed application.
Handling those in a way that reduces frustration for everyone, and keeps support costs down is important to us.
Based upon the experience of the three users that posted to that thread (Drumology2001, echelonwebdesign, and sg2048) it is apparent there is room for other improvements within the system, such as update notifications. We’re taking into consideration their experience to determine how we can making WordPress hosted on a cPanel & WHM server a better experience for all.
Me:
To summarize, my argument is largely a pragmatic one.
You can’t prevent a user from updating the software outside of the cPanel Site Software Distribution Method (gosh, that’s a mouthful, I’ll call it the CPSSDM from here on out) — as they could always just use FTP to update the software.
From a software architecture perspective, this is problematic — and it would be far simpler to develop around it so that any software updates run — whether via FTP, a remote management tool (such as ManageWP, InfiniteWP, Jetpack Manage, iThemes Sync, etc), via the WordPress Dashboard, or an Automatic Update — successfully apprises the CPSSDM of the update.
Long story short, WordPress updating itself, and the CPSSDM managing updates shouldn’t be a conflict, they should behave in concert with one another, complementing each other in their behaviors, rather than stomping across the sandbox to kick over the other’s castle.
As mentioned above, I’d be delighted to volunteer my time and expertise to help the CPSSDM have an integration that doesn’t involve hacking core files and potentially leaving users running insecure software.
cPanel:
Thanks for the follow up and summary. We both agree there are improvements to be made, as with any software – it’s never ending. We will definitely reach out directly once your expertise is needed to make sure we’re providing the absolute best experience we can to both of our customers.
Thank you again, and keep that feedback coming!
So, in the end, I don’t know where things are going from here. I know that a lot of users find it super convenient to use one-click installs for WordPress, and I really hope that users who take the short-cut of one-click installs don’t wind up dead-ended on an old, insecure release because of some sort of server misconfiguration and hacked core files.
I’m also optimistic, because cPanel seems willing to take suggestions and input from the community on best practices. After all — let’s face it, when they’re providing one click installs for dozens of software projects, they’re not going to be able to work with each software project individually to make sure they’re doing it the best way possible. A lot is dependent on the communities reaching out and offering to help them do it the right way.
I look forward to hearing back from cPanel and seeing their integration done in a way that works well and plays nice with everyone else in the sandbox too.
After all, I think we all want sustainable, stable integrations — not fragile bits of code that will break if a user upgrades the wrong way. 🙂
I get the feeling, quite often, that frameworks get the short end of the stick in the popular mindset. You’ll often hear things like
“Yes, they’re useful for a beginner maybe, but if you’re a professional developer, you shouldn’t need them.”
“They’re sluggish.”
“They add bulk to a project.”
“You clearly haven’t optimized enough.”
Honestly, it’s a choice that needs to be made on a project-by-project basis — both whether to use a framework, and how large of a framework to use. Regardless of these choices, it’s never a question of whether you’ve optimized your project — it’s a question of what you’ve chosen to optimize your project for.
As a case study for this question, let’s look at: Do you want to use jQuery (or some other established project — Prototype, Backbone, Underscore, React, whatever) in your project or not?
Well, if you do use jQuery, it can smooth over a lot of browser inconsistencies (most of which have nothing to do with old IE versions), and give you a more reliable output. It can keep your code more readable, and more maintainable, as all of the browser fixes are bundled into jQuery itself! Keep your version of jQuery current, and many future browser inconsistencies or changes in how browsers handle things will be handled for you.
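For a concrete (if dated) example of the kind of branching a library hides, here’s roughly what portable event binding looked like before jQuery’s `.on()`. This is a sketch of the pattern, not jQuery’s actual internals:

```javascript
// The sort of cross-browser branching jQuery takes care of for you:
// old IE used attachEvent instead of the standard addEventListener.
function bindClick( el, handler ) {
	if ( el.addEventListener ) {
		el.addEventListener( 'click', handler, false ); // standards-compliant browsers
	} else if ( el.attachEvent ) {
		el.attachEvent( 'onclick', handler ); // IE 8 and earlier
	}
}

// With jQuery, the branching (plus event-object normalization) is handled:
// jQuery( el ).on( 'click', handler );
```

Multiply that one `if`/`else` by every event quirk, DOM-traversal quirk, and Ajax quirk across browsers, and the value of keeping those fixes inside one maintained library becomes clear.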
If you want a lighter-weight webpage, and you’re trying to optimize for faster performance, you may prefer to use vanilla Javascript instead. A friend of mine remarked on Twitter that he prefers to handle browser inconsistencies himself, because he can get faster performance:
@daljo628 Which I know about, code around, and still get 200% performance increases. @Microsoft @NebraskaCC
The downside of this is that by optimizing so heavily for performance, it can make it far more difficult to maintain your project down the road. When another developer picks up your project in a few months or a few years down the road, is the optimized code going to make sense? Are your ‘code around’-s still going to work, and has someone (you?) been actively maintaining it (and all your other disparate projects) to account for new browser issues that have cropped up since? If the application is expanded by a new developer, will they have the same level of experience as you, and properly handle cross-browser issues in the new code as well?
So, there’s always tradeoffs. The final judgement will often depend on the sort of project and the sort of client or company that the project is for.
If you’re launching an enterprise-level website, an HTML5 game, or something that will have an active team of developers behind it, you may well find that it’s worth doing something custom for it.
If you’re an agency building client sites that — once launched — may get someone looking at them every few months for further work or maintenance … jQuery probably makes a lot more sense. It will keep your code shorter and more readable, and if you keep jQuery up to date (which WordPress will do for you if you use its bundled version — and of course, keep WordPress updated) any future browser inconsistencies will be handled as well.
If you’re a freelancer or commercial theme/plugin vendor, using jQuery rather than something custom has always struck me as a common courtesy. By using an established, documented library, you’re leaving the codebase in an understandable and tidy state for the next developer who has to step in and figure out what’s going on in order to make modifications down the road.
So in the end, the answer is always going to be that it depends. The trade-offs that one project can make without a second thought may be inconceivable to thrust upon another.