Disclaimer: This is basically me stream-of-thought’ing things as I’m learning the Core Media Modal’s codebase. It’s my scratchpad, and I’m merely making it public in the hopes that it may be useful to someone else at some point in the future. Some things are probably very wrong. If I catch it, I’ll likely come back and edit it later to be less wrong. If you see me doing or saying something stupid, please leave a comment, so I can be less stupid. Thanks!
When exploring the code in WordPress, it looks like it’s best to do the investigating in the develop.svn.wordpress.org repository’s src directory (yes, develop.svn maps to core.trac — basically for legacy reasons and not wanting to change core.trac’s URL when they changed core.svn over to be the Grunt’d version), before build tools such as Grunt have a chance to run Browserify on it. If you try to read through the code on the GitHub mirror, you’re gonna have a bad time, as that doesn’t have the `wp-includes/js/media/` directory with the source files in it.
Browserify is a slick little Node tool that bundles a bunch of files into a single file, letting you `require()` them in JavaScript. This makes them easier to work with in the source, and quicker to load in a browser. WordPress has been using it to compile the JavaScript for media since 4.2 (#28510), when the great splittening happened. If this intrigues or confuses you, Scott Taylor has a great write-up on that ticket about the whys, hows, and whatnot. It originally merged in at [31373], halfway through the 4.2 cycle.
Okay, time to dig in. (So that I’m not inadvertently writing a book, I’m going to split this into a series — but if you’d like to read them all, I’m dropping them in a tag. You can find them all here.)
Having an API is well and good, but if there are no ways for third-party apps to actually authenticate and use the API, it’s not very useful.
Background
While the framework for the REST API was merged into WordPress Core with the 4.4 release, the only current means of using any endpoints that require authentication is what’s known as ‘cookie authentication’ — that is, piggybacking off of the authentication cookies (plus a nonce) that WordPress Core sets in the browser when you log in to your WordPress site traditionally. Unfortunately, that leaves the REST API little more useful than the legacy `admin-ajax.php` file.
Fortunately, there are several authentication methods being worked on at the moment, in plugin form, for consideration for merging into Core.
I’m heading up one of them, called Application Passwords. In short, it lets a user generate as many single-application passwords as desired — one for each phone, desktop app, or other integration — and later revoke any of them at will, without affecting other applications or the user’s normal authentication. The passwords are then passed with each request as Basic Authentication, encoded in the header of the request, per RFC 2617.
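As a minimal sketch of what that looks like on the wire (the credentials and function name here are placeholders, not part of the plugin itself): Basic Authentication is just an `Authorization` header carrying a base64-encoded `username:password` pair.

```php
// Minimal sketch: Basic Authentication (RFC 2617) is an Authorization
// header carrying base64( "username:password" ). Credentials are placeholders.
function basic_auth_header( $username, $app_password ) {
	return 'Authorization: Basic ' . base64_encode( $username . ':' . $app_password );
}

// A client would attach this header to every REST API request it makes.
echo basic_auth_header( 'user', 'pass' );
// Authorization: Basic dXNlcjpwYXNz
```

Any HTTP library can then send that header along with the request; the server decodes it and validates the application password against its stored list.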
The other plugin is OAuth 1.0a authentication (spec). Most OAuth usage across the internet is actually OAuth 2.0 — however, OAuth 2.0 requires HTTPS on the server side. Ordinarily for most hosted services, this is not a problem. However, for a distributed platform like WordPress, this is untenable due to the majority of sites not supporting HTTPS. So an older, more complex specification — designed to not require HTTPS — had to be used.
For the record, I’m fully expecting to see an OAuth 2.0 plugin be built in the near future for use on sites that have an SSL certificate and support HTTPS. However, that may not be very useful for app developers who want a ‘build once, run everywhere’ authentication method that will always be available.
Now, this is a very interesting question, and it leads to many more — such as: if an Application Password shouldn’t be usable to create or delete other Application Passwords, should it be allowed to do other user-administrative tasks (provided the relevant user has those permissions)? After all, if we’re preventing it from making a new Application Password, but it can just go and change the user’s actual password or email address, that’s a rather silly restriction.
So, there are several possibilities.
First, you can just say “Any way into your account gives full access to everything your account can do. Be careful which applications and websites you give access to.” — the most basic, relatively easy-to-understand approach. Honestly, this is my preference.
Secondly, when creating a new Application Password or connecting a new client via OAuth, you could do something like selecting what ‘role’ you’d like to give that connection. For example, if your normal user account is an Administrator, but you’re connecting an app that’s intended just for writing blog posts, you may want to downscale your role for that authentication key to an Author or perhaps an Editor. An advantage here is that it would be more cross-API — that is, it would work just as well with the legacy XML-RPC API as with the new REST API.
This ‘role restriction’ method may be somewhat fragile, as it would need to filter `current_user_can` and `user_can` — but only when checking the current user’s ID. That could goof up cron tasks or other incidental things that happen to run on the same request as the REST API request.
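As a rough sketch of what that filtering might look like, assuming we can look up the role attached to the key the request authenticated with (the `my_` function names and that lookup are hypothetical):

```php
/**
 * Sketch: when a request authenticated with a downscaled key, strip
 * any capabilities the chosen role doesn't grant. The `my_` names and
 * the per-key role lookup are hypothetical.
 */
add_filter( 'user_has_cap', 'my_restrict_caps_to_role', 10, 4 );

function my_restrict_caps_to_role( $allcaps, $caps, $args, $user ) {
	// Only filter checks against the user this request authenticated as.
	if ( (int) $user->ID !== get_current_user_id() ) {
		return $allcaps;
	}

	$role_slug = my_get_role_for_current_auth_key(); // hypothetical -- e.g. 'author'.
	$role      = $role_slug ? get_role( $role_slug ) : null;
	if ( ! $role ) {
		return $allcaps;
	}

	// Keep only the capabilities the downscaled role also grants.
	return array_intersect_key( $allcaps, array_filter( $role->capabilities ) );
}
```

Note the first guard: it is exactly the “only when checking the current user’s ID” restriction, and it is also exactly where the fragility with cron tasks on the same request would creep in.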
Thirdly, we could do something REST API-specific — whitelist or blacklist REST API endpoints per authentication key. So, when creating a new Application Password or authorizing a new OAuth connection, you would set rules as to which endpoints it can be used to hit. Perhaps you’d want to allow all `wp/v2`-namespaced endpoints, but no endpoints added by plugins under custom namespaces. Or only allow it to access the core `posts` and `taxonomies` endpoints. Or even allow it to access anything but the `plugins`, `themes`, `users`, or `options` endpoints.
The downside of this is that it won’t work with the legacy XML-RPC API, and the user interface for it would likely be far more confusing for users. It could also get problematic, as permissions may vary between who can read the `options` endpoints and who can write to them — or the like. It may then further complicate into allowing GET requests but not POST, PUT, or DELETE requests to certain endpoints.
Your Thoughts?
In the end, I’m not sure what the best path forward is. Maybe I’ve missed something. But I am confident that we need to start paying more attention to authentication and permissions for the forthcoming REST API. If you have any thoughts or remarks, please leave them in the comments.
I know my password. I still have my cell phone number that is set as a recovery number.
So how could I be locked out?
Several months back, I changed my cell carrier to Project Fi — the Google amalgamation of T-Mobile, Sprint, and WiFi networks to provide better coverage at lower cost. I’ve been thrilled with the service (like, seriously ecstatic), but there are some odd issues that have cropped up.
For one, it seems that text messages from SMS ‘short codes’ don’t go through. This has been a known issue with Project Fi for some time now. I first found out about it when trying to set up Google Wallet with USAA — which runs through an automated SMS short code system. Many automated texts do work without issue — Dropbox, for one.
So I just spent a full hour (I clocked it at the end) on the phone with a very empathetic and understanding customer service rep from Apple. Unfortunately:
I don’t have any iOS or OS X devices currently logged into my iCloud account (I had just voluntarily switched my primary computer to Windows for work)
I can’t receive text messages from their automated system at the phone number I have on file (despite the fact that I called them from that number, and they can call me back at that same number)
I don’t have my recovery key (it was about three years back when I first turned on two-factor authentication, and I have absolutely no idea where I would have stored it)
So there is absolutely nothing that they can do for me, it seems.
I mean, I understand this to a point. The rep I had on the phone was very apologetic, but the system that they built just doesn’t account for the fact that perhaps sometimes phone numbers lose the ability to receive text messages.
They knew I was who I said I was — I was calling from my number on file, I had all my credit card information, I could authenticate the first step of logging in. They could even call me at the phone number they have on file. But because they couldn’t text me, they couldn’t — not wouldn’t, but actually couldn’t — help me.
But this isn’t meant to be a sob story, or a tirade against Apple. Okay, maybe a little bit of a gripe, but I’d like it to be more a focus on the importance of considering edge cases in development.
The customer support reps are incredibly constrained. They knew without a doubt that I could receive calls at the phone number in question. But they weren’t empowered to do anything about it. They escalated the issue, and it seems no one was able to do anything, apart from offering their condolences that I wouldn’t be able to log into my account.
If there is one take-away from this, I suppose it’s to empower your customer support reps to actually do their jobs. I know Apple has gotten burned in the past by hackers gaming the system, but the lesson there is the importance of being judicious when dealing with requests, not barring the doors against all of them.
—
As a semi-related note, I’m heading up the group working on bringing Two-Factor support to WordPress core. Two-Factor is something that I believe in deeply; I just also believe in the importance of carefully building out the systems that serve as back ends to such methods of authentication.
— george stephanis toots on mastodon (@daljo628) May 11, 2015
After I published this, I had someone from cPanel reach out to have a more in-depth conversation than it’s really possible to manage in a medium that caps you at 140 characters.
In the interest of transparency and context — as well as showing cPanel’s efforts thus far in working to fix things, here’s the conversation that transpired on that ticket — #6489755 on their internal ticketing system. Any modifications on my part are purely for formatting, as well as omitting names of customer support folks.
cPanel:
Are you using the cPAddons tool within the cPanel interface to install & manage WordPress? If so, then yes, we disable the auto-update functionality within the application so the updates can be managed from the cPanel interface itself. The way our cPAddons tool tracks software is not compatible with the way WordPress updates, hence why we disable the auto-updates so we can track it through cPAddons.
If you’re not using the cPAddons tool to install/manage WordPress and have concerns of us modifying the core of the application, please let me know.
Regards,
—
*********** ********
Technical Support Manager
cPanel Inc.
Me:
I’m a Core Contributor to WordPress, not a cPanel User. I was speaking up on Twitter because I learned through some Forum threads that y’all were doing some very problematic things — which I’m hoping to address here.
Just to make sure we’re talking about the same thing, the three changes that I’m aware of are:
return true; // Force this functionality to disabled because it is incompatible with cPAddons.
(Please note that all my code references to WordPress core are aimed at the latest revision of the `master` branch on GitHub)
It looks like when you’re hacking core, you’re turning off not merely Automatic Updates (as you suggested prior), but all WordPress Updates as a whole. This is a Very Bad Thing. If you were merely disabling Automatic Updates, but still leaving the user with the ability to use WordPress’s very well established upgrade system, that would be something else entirely — and in fact is documented extensively here: https://make.wordpress.org/core/2013/10/25/the-definitive-guide-to-disabling-auto-updates-in-wordpress-3-7/
— and can be done by adding a single line to your generated wp-config.php file when installing WordPress:
define( 'WP_AUTO_UPDATE_CORE', false ); # Disables all automatic core updates.
Why do you feel the need to fully disable all updates from within WordPress and force users to use either cPanel or FTP exclusively to upgrade WordPress? Why can’t they work in conjunction with one another?
Clearly, users have been dismayed and shocked when their installs haven’t been notified of security point releases that are available as y’all have killed the `get_core_updates()` function. Many don’t even realize they may need to go into cPanel to upgrade their WordPress install, and so their installation is left at an outdated, insecure version that is incredibly vulnerable to exploit.
cPanel:
Thanks for the followup. The WordPress management through cPAddons is quite old, and very well may have been in place prior to having define( 'WP_AUTO_UPDATE_CORE', false ); within the WordPress application. I’m uncertain of that as I do not know when WP introduced that function but from Googling it the oldest result I can find is from 2012.
That said, cPanel does in fact do a few things with cPAddons in regards to customers who have outdated versions:
Whenever WP releases a maintenance build that addresses security concerns, we react very quickly to get our software updated to be available to customers.
By default, we define that software managed/installed through cPAddons is automatically updated when a new update is available.
Based on the above information, if the server administrator leaves the defaults enabled, once WP introduces a maintenance release that corrects security concerns and we’ve tested and updated our source for it, customers will receive the release automatically.
If the server administrator decides to disable automatic software updates, the end user and systems administrator will still receive notifications that their installation is out of date accompanied with steps on how to update their application.
With that, I can definitely appreciate the concern for making it as easy and automated as possible for users to get updates for their WordPress; there’s definitely more to the situation than solely disabling WordPress’ automated updates in the core.
I’ve submitted a case (#188545) with our developers to have the logic for disabling updates changed from its current behavior to using define( 'WP_AUTO_UPDATE_CORE', false );.
If you have any other questions, feedback, or concerns, please don’t hesitate to let me know.
Me:
By default, we define that software managed/installed through cPAddons is automatically updated when a new update is available.
Clearly, that’s not the case in practice. As the user who discovered this remarked:
In my audit, it appears that — of 37 total WP-based websites on our server — we have 14 that have not updated to the latest version of WordPress. Of those 14, the oldest version is 3.9 (which 3 of 14 are running), and the newest is 4.1 (which 4 of the 14 are running).
If cPanel wants to manually update sites to current releases, I’m fully, 100% in favor of that. It’s a solid step to a safer, more reliable web.
My issue is that y’all are preventing users from updating themselves via the existing WordPress infrastructure. There’s really no reason for blocking an existing, stable upgrade system. If you just need some way for cPanel to be notified of core upgrades, it’s relatively trivial to set up a perhaps 20-line function that will notify cPanel when an update happens — and that will even account for users manually updating their WordPress installation via FTP, which it seems your current version would break on.
Here’s a proof of concept:
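A sketch of what such a notifier might look like: `_core_updated_successfully` is the action WordPress fires once a core update completes, and the cPanel endpoint and function name below are hypothetical placeholders.

```php
/**
 * Proof-of-concept sketch: ping cPanel whenever a core update finishes.
 * `_core_updated_successfully` fires after any core update -- automatic,
 * via the dashboard, or otherwise. The cPanel notification URL and the
 * `my_` function name are hypothetical.
 */
add_action( '_core_updated_successfully', 'my_notify_cpanel_of_core_update' );

function my_notify_cpanel_of_core_update( $wp_version ) {
	wp_remote_post( 'https://cpanel.example.com/notify-update', array(
		'timeout' => 5,
		'body'    => array(
			'site'    => home_url(),
			'version' => $wp_version,
		),
	) );
}
```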
Also — I’m not a lawyer, but distributing a forked version of WordPress and still calling it WordPress (rather than cPanelPress or something) may be a trademark violation. Forking is 100% fine under the GPL for copyright on the code, but may be problematic from a trademark perspective — as what you’re distributing isn’t actually WordPress, but rather a hacked up version. If that makes sense?
cPanel:
Thank you for contacting us regarding people’s experience using WordPress as distributed via our Site Software feature. This feature is a method of installing and managing various third party applications. Applications installed via Site Software are intended to be managed entirely within Site Software, thus in-application updaters are disabled. Allowing the in-application update to proceed will cause a conflict between the updated application and the Site Software, which can easily result in confusion. From this perspective what we are doing is no different from Debian and other Linux distros that distribute applications with in-application updaters.
We generally release the latest version of WordPress within 1 to 5 days of the latest WordPress update. At minimum, server administrators are informed each night of all Site Software applications that need updating. It is up to users to configure their notifications within cPanel to receive such updates.
Within the Site Software user interface, users are able to upgrade all applications that are out of date. In the admin interface, a server admin can choose to upgrade all Site Software applications on the entire server.
Based upon what the Drumology2001 user reported on the forum it appears something is amiss on that server. We’d love to examine that server to determine why WordPress updates were not available to the user. Based upon the fuzzy dates used on the forum, and compared with our internal records, the 4.1.1 update was available to the Site Software system prior to the initial post. We’ll reach out to him to determine whether there is anything we can do there.
One of our concerns is that in-application updaters are incompatible with the Site Software distribution method. There are various things that could happen due to updating a Site Software managed application outside of Site Software. At minimum it means that from that point onward the server admin, and potentially the user, will be informed of a software update that is no longer needed. At worst someone will force an update that results in corrupting the installed application.
Handling those in a way that reduces frustration for everyone, and keeps support costs down is important to us.
Based upon the experience of the three users that posted to that thread (Drumology2001, echelonwebdesign, and sg2048) it is apparent there is room for other improvements within the system, such as update notifications. We’re taking into consideration their experience to determine how we can make WordPress hosted on a cPanel & WHM server a better experience for all.
Me:
To summarize, my argument is largely a pragmatic one.
You can’t prevent a user from updating the software outside of the cPanel Site Software Distribution Method (gosh, that’s a mouthful, I’ll call it the CPSSDM from here on out) — as they could always just use FTP to update the software.
From a software architecture perspective, this is problematic — and it would be far simpler to develop around it so that any software updates run — whether via FTP, a remote management tool (such as ManageWP, InfiniteWP, Jetpack Manage, iThemes Sync, etc), via the WordPress Dashboard, or an Automatic Update — successfully apprises the CPSSDM of the update.
Long story short, WordPress updating itself, and the CPSSDM managing updates shouldn’t be a conflict, they should behave in concert with one another, complementing each other in their behaviors, rather than stomping across the sandbox to kick over the other’s castle.
As mentioned above, I’d be delighted to volunteer my time and expertise to help the CPSSDM have an integration that doesn’t involve hacking core files and potentially leaving users running insecure software.
cPanel:
Thanks for the follow up and summary. We both agree there are improvements to be made, as with any software – it’s never ending. We will definitely reach out directly once your expertise is needed to make sure we’re providing the absolute best experience we can to both of our customers.
Thank you again, and keep that feedback coming!
So, in the end, I don’t know where things are going from here. I know that a lot of users find one-click installs for WordPress super convenient, and I really hope that users who take that short-cut don’t wind up dead-ended on an old, insecure release because of some sort of server misconfiguration and hacked core files.
I’m also optimistic, because cPanel seems willing to take suggestions and input from the community on best practices. After all — let’s face it, when they’re providing one click installs for dozens of software projects, they’re not going to be able to work with each software project individually to make sure they’re doing it the best way possible. A lot is dependent on the communities reaching out and offering to help them do it the right way.
I look forward to hearing back from cPanel and seeing their integration done in a way that works well and plays nice with everyone else in the sandbox too.
After all, I think we all want sustainable, stable integrations — not fragile bits of code that will break if a user upgrades the wrong way. 🙂
I get the feeling, quite often, that frameworks get the short end of the stick in the popular mindset. You’ll often hear things like
“Yes, they’re useful for a beginner maybe, but if you’re a professional developer, you shouldn’t need them.”
“They’re sluggish.”
“They add bulk to a project.”
“You clearly haven’t optimized enough.”
Honestly, it’s a choice that needs to be made on a project-by-project basis — both whether to use a framework, and how large a framework to use. Regardless of these choices, it’s never a question of whether you’ve optimized your project — it is a question of what you’ve chosen to optimize your project for.
As a case study for this question, let’s look at: Do you want to use jQuery (or some other established project — Prototype, Backbone, Underscore, React, whatever) in your project or not?
Well, if you do use jQuery, it can smooth over a lot of browser inconsistencies (most of which have nothing to do with old IE versions), and give you a more reliable output. It can keep your code more readable, and more maintainable, as all of the browser fixes are bundled into jQuery itself! Keep your version of jQuery current, and many future browser inconsistencies or changes in how browsers handle things will be handled for you.
If you want a lighter-weight webpage, and you’re trying to optimize for faster performance, you may prefer to use vanilla JavaScript instead. A friend of mine remarked on Twitter that he prefers to handle browser inconsistencies himself, because he can get faster performance:
@daljo628 Which I know about, code around, and still get 200% performance increases. @Microsoft @NebraskaCC
The downside of this is that by optimizing so heavily for performance, you can make your project far more difficult to maintain down the road. When another developer picks it up a few months or a few years from now, is the optimized code going to make sense? Are your ‘code around’-s still going to work, and has someone (you?) been actively maintaining them — across all your disparate projects — to account for new browser issues that have cropped up since? If the application is expanded by a new developer, will they have the same level of experience as you, and properly handle cross-browser issues in the new code as well?
So, there’s always tradeoffs. The final judgement will often depend on the sort of project and the sort of client or company that the project is for.
If you’re launching an Enterprise-level website, an HTML5 game, or something that will have an active team of developers behind it, you may well find that it’s worth doing something custom for it.
If you’re an agency building client sites that — once launched — may get someone looking at them every few months for further work or maintenance … jQuery probably makes a lot more sense. It will keep your code shorter and more readable, and if you keep jQuery up to date (which WordPress will do for you if you use its bundled version — and of course, keep WordPress updated) any future browser inconsistencies will be handled as well.
If you’re a freelancer or commercial theme/plugin vendor, using jQuery rather than something custom has always struck me as a common courtesy. By using an established, documented library, you’re leaving the codebase in an understandable and tidy state for the next developer who has to step in and figure out what’s going on in order to make modifications down the road.
So in the end, the answer is always going to be that it depends. The trade-offs that one project can make without a second thought may be inconceivable to thrust upon another.
In keeping with a previous post I’d made a couple months ago explaining the oft-discussed rationale of why we do things the way we do with Jetpack, I’ll be doing it again today, on a different — but related — topic.
I may as well make a series of it.
This is the first of two posts (in theory, I’ll remember to write the second) explaining why Jetpack is a big plugin with many features, rather than many individual plugins. This post will be looking at the primary technical reason. The abundance of other reasons will be in the subsequent post. (So please don’t read this post and think it’s the only reason — it’s not)
tl;dr: Dependency management sucks.
Jetpack, as you may be aware, is structured as a bunch of modules. Many — but not all — require a connection to WordPress.com to function. This isn’t for vanity purposes, it’s because they actually leverage the WordPress.com server infrastructure to do things harder, better, faster, stronger than a $5/month shared host is capable of. To do that, they need to be able to communicate securely with WordPress.com, and WordPress.com must be able to communicate securely back to your site.
Some of the modules that require a connection are things such as Publicize (which uses the WordPress.com API keys to publicize to assorted third-party systems, rather than making users register various developer accounts and get their own API keys), Related Posts (which syncs some content up to the WordPress.com servers and indexes it on a large ElasticSearch index more efficiently and accurately than could be done in a MySQL database), Monitor (which pings your site every five minutes and emails you if it’s down), Comments (which passes data back and forth behind the scenes to enable secure third-party comment authentication) — you get the idea.
We could bundle the connection library with each individual plugin. However, we’d need to make sure it was namespaced correctly so each different plugin can use its own correctly versioned instance of the connection classes. Which would then mean a user could have well over a dozen copies and different versions of the same connection class active at a given time. Which will make things more difficult with respect to developing the plugins, as you can’t assume methods in one are necessarily in another. And when you make a change in the master class, you need to scan each repository to make sure you’re not breaking anything there, and keep changes synced to well over a dozen repositories. But I digress.
To avoid duplicate code, the modules that depend on talking back and forth with WordPress.com all use a common library that handles signing and verifying requests, API calls, and the like.
Because it’s all packaged in a single plugin, we can be sure that it’s all running the required version. If Publicize needs a change in the core connection library, we can be sure that the version of the connection library in Jetpack has those changes. If the core connection library needs to change structure, we can make sure that any modules that used the old methods are updated to run the new ones instead. Everything is maintained so that it’s running smoothly and works properly with each other.
Now, if Likes, Single Sign On, After the Deadline, Post by Email and others were their own plugins, and connected to a separate Jetpack Core plugin, versioning gets tricky. It could work, in theory, if every plugin is kept up to date, always and forever. But the instant that the user is using, say, an outdated version of Subscriptions with an outdated Jetpack Core (which work perfectly together), and then installs the up-to-date WP.me Shortlinks plugin, things could break because WP.me Shortlinks expects a more up-to-date Jetpack Core. So you go ahead and update Jetpack Core to current, but now Subscriptions — which used to work perfectly — now breaks because there was a method change in Jetpack Core, that is fixed in the up-to-date version of Subscriptions, but the user isn’t running the up-to-date version. Horrible UX.
Plus, if the user doesn’t have any Jetpack stuff, the installation flow for their first Jetpack Plugin that needs the core would be something like this:
Install Stats.
Activate Stats.
Get error saying you need Jetpack Core for Stats to function.
As I said, dependency management is hard, and there’s not really a good way to manage it in WordPress. There have been some very worthwhile attempts made, but none that can have a sufficiently solid user experience for an average user to compare with our current system and flow.
Any questions or suggestions about dependency management and Jetpack? Ask away!
Simpler, right? It reads more easily, and as an added bonus, if something toggles RTL after you’ve registered the path to the asset, it’s handled gracefully — which asset path to serve isn’t determined until the tag is actually output.
Now, this assumes that your RTL stylesheet is a straight replacement for your normal stylesheet. Most are — they could be automatically generated with a tool like CSSJanus or CSS-Flip. But if you’ve got an add-on CSS file that you want to load in addition, containing just the overrides for RTL languages, you can handle that just as easily!
Detailed explanation (with bonus examples for handling minified versions of both the regular and RTL CSS as well):
/**
 * If you're supplying a pre-minified version of the stylesheet, you'll
 * need this, and to add the `suffix` data, so that core knows to
 * replace `example.min.css` with `example-rtl.min.css` -- handling
 * the suffix properly.
 */
$min = ( defined( 'SCRIPT_DEBUG' ) && SCRIPT_DEBUG ) ? '' : '.min';
/**
* The normal registration. You're familiar with this already.
*/
wp_register_style( 'example', plugins_url( "css/example{$min}.css", __FILE__ ), array(), '1.0' );
/**
* I set the value to 'replace', so it will replace the normal css file if rtl,
* but it could also be 'addon' for a css file that just gets enqueued as
* well, rather than replacing the normal one.
*/
wp_style_add_data( 'example', 'rtl', 'replace' );
/**
 * Finally, if we are replacing the existing file and there's some sort
 * of suffix like `.min` as mentioned earlier, we need to let core know
 * about it, so that it can keep that suffix after the `-rtl` it adds
 * to the path.
 */
wp_style_add_data( 'example', 'suffix', $min );
/**
* Then we just enqueue it as we would normally! If it's going to always
* be enqueued regardless, we could just call `wp_enqueue_style()` rather
* than `wp_register_style()` above.
*/
wp_enqueue_style( 'example' );
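As for the add-on case: as I read core’s `WP_Styles::do_item()`, any `rtl` value other than `true` or `'replace'` is treated as the source of an extra stylesheet, printed in addition to the original for RTL locales. So a sketch (handle and paths hypothetical) might be:

```php
/**
 * The add-on case: per my reading of `WP_Styles::do_item()`, an 'rtl'
 * value that isn't `true` or 'replace' is used as the source of an
 * extra stylesheet, output *in addition to* the original when the
 * locale is RTL. Paths here are hypothetical.
 */
wp_register_style( 'example', plugins_url( 'css/example.css', __FILE__ ), array(), '1.0' );
wp_style_add_data( 'example', 'rtl', plugins_url( 'css/example-rtl-overrides.css', __FILE__ ) );
wp_enqueue_style( 'example' );
```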
Hopefully, this is the last time that I’ll have to answer this question.
Frankly, it’s been answered dozens of times before. Now, I’m hoping to use this as a canonical ‘Answer Link’ that I can refer people to. I’ll keep up with the comments, so if anyone would like to ask a follow-up, feel free.
So… Jetpack is back to its old ways of auto-activating modules huh? #suck
Well, to start off, I should probably clarify what we currently do on this. We don’t auto-activate every new module that comes in.
We never auto-activate features that affect the display or front-end of your site — or at least not unless a site administrator explicitly configures them to.
So, for example, something like Photon, which would swap all your content images to CDN-hosted versions, doesn’t auto-activate. Our comments system doesn’t auto-activate either, as that would swap out your native comment form. Our sharing buttons do, but they don’t display unless you take the time to drag down some sharing buttons to the output box under Settings > Sharing.
However, modules like Publicize, Widget Visibility, and the like — they just give you new tools that you can use, with no risk of affecting your everyday visitors. When users upgrade, we give them a notification of what just happened, and point out some new features we’ve built that they may want to activate themselves.
One thing we expanded on fairly recently, perhaps six months ago, is a ‘plugin duplication list’, for lack of a better phrase. These aren’t plugins that have an actual code-based conflict with a module; they’re ones that may be … duplicating effort. Previously, we just scanned for plugins that would output OG meta tags, and short-circuited our own provider when one was found. However, since Jetpack 2.6, which shipped in November 2013, we’re doing it via a filter for all modules. For example, if you’ve got Gravity Forms or Contact Form 7 installed and active, our internal Jetpack Contact Form won’t auto-activate. If you’ve got AddThis or ShareThis active, our sharing buttons module won’t even kick in.
Now, obviously, we can’t catch every single plugin that may be similar enough to one of our modules to give cause to negate auto-activation. So there’s a filter, `jetpack_get_default_modules`, that can be used in any plugin to cancel auto-activation on any module.
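As a quick sketch of using that filter from a functionality plugin or a theme’s `functions.php` (the `'sharedaddy'` slug here is an assumption for illustration — check your install’s module headers for the exact slugs you want to target):

```php
/**
 * Jetpack passes its array of default module slugs through the
 * `jetpack_get_default_modules` filter before auto-activating them,
 * so removing a slug here prevents that module from ever
 * auto-activating.
 */
add_filter( 'jetpack_get_default_modules', function ( $modules ) {
	// Strip the sharing module (assumed slug) from the defaults.
	return array_diff( $modules, array( 'sharedaddy' ) );
} );

// Or, to turn off auto-activation across the board:
add_filter( 'jetpack_get_default_modules', '__return_empty_array' );
```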
We’re going to continue using our discretion to auto-activate select modules by default, but if you’d like to turn it off permanently for yours or a client’s site, we’ve made it ridiculously easy to do.
We believe that judiciously enabling new features is a win for users, especially considering 1) how low-impact most features are when ‘active’ but not actually implemented by a site owner, 2) how awkward it is for a site owner to have to enable something twice — for example, enabling the Custom Post Formats bit, and then having to visit Settings > Writing in order to actually enable the Portfolio custom post type.
If you have clients that you’d like to be an active partner with, customizing the Jetpack experience for their sites — that’s terrific. You’re the type of people that we add bunches of filters for. We’re all about empowering you to override our decisions; we just prefer to keep the default user interface free of a thousand toggles.
If you right-click and inspect the element, the URL is just what you expected! If you right-click and open in a new tab — same thing! But if you click normally and let it trigger a Javascript event, it modifies the link before your browser actually processes it.
After you’ve clicked on it normally once, you can come back and re-inspect it to see that the URL on the link has now changed to the one with the referer data on it — they’re rewriting it inline, and intentionally delaying it so that on your first click you don’t realize the data is being appended.
This can be a problem because some sites employ concealers for the Referer HTTP header (no, I didn’t misspell ‘referrer’ — the header name itself is the famous misspelling), such as href.li. By forcibly embedding the data in a GET parameter, the link leaks it in a way that’s very difficult to block, taking advantage of the trust offered by accepting Twitter as an oEmbed provider.
Even if it’s an entirely trivial matter, it’s still forcing you to do something.
And goshdarnit, I’m lazy.
I also prefer front-loading effort when possible. Ounce of prevention, pound of cure, stitch in time saving nine, and all that.
And I respect other people’s time as much as I value my own. So when I build something, I try to avoid decision points whenever possible. This occasionally results in the loss of options, but I believe it makes for a smoother user flow.
Now, occasionally power-users will want to modify functionality. Adding a decision point for all users for the sake of the minority is silly, especially when power-users can leverage other methods — filters, actions, functionality plugins that extend the first plugin — to accomplish their goals.
To each according to their needs. Typical users need a simple, smooth, classy interface. Power users need to get under the hood. Why try to make something that doesn’t work well for either by trying to serve both?
The best middle ground I’ve been able to come up with is offering a secondary ‘under the hood’ plugin that exposes a lot of filters as options. Keep it canonical and clean, but present all the options.
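A minimal sketch of that pattern (every name here is hypothetical): the main plugin applies a filter at each decision point, and the ‘under the hood’ companion plugin exposes that filter as a stored option, so typical users never see the toggle:

```php
/**
 * In the main plugin: no settings UI, just a filterable default
 * at the decision point.
 */
$enabled = apply_filters( 'myplugin_enable_fancy_feature', true );

/**
 * In the hypothetical companion plugin: surface that same filter
 * as a saved option, falling back to the main plugin's default
 * when the power user hasn't chosen anything.
 */
add_filter( 'myplugin_enable_fancy_feature', function ( $default ) {
	return (bool) get_option( 'myplugin_fancy_feature', $default );
} );
```

The main plugin stays canonical and clean; the companion plugin is where all the knobs live.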