I’ve recently started building a new web app and have decided to build it in Symfony (and hopefully, ultimately, Vue for the front-end), as a way of learning what looks like a really cool framework. I’d had an introduction to Symfony through work, then did some online courses, including a load of the excellent SymfonyCasts, so I now have a reasonable amount of knowledge on the subject.
One cool feature of Symfony is its form handling abilities. Using simple annotations and generated code, you can get a lot of functionality around submitting, validating and saving your form data for very little effort. When rendering a form to the user, Symfony’s template engine, Twig, has a load of built-in functions to render each field, such as form_widget to display the field’s HTML tag, form_label to display a field’s label, form_row to combine the two above (plus errors and help text), etc. These can be overridden, so you can style/theme your form using your own CSS, while keeping Symfony and Twig’s powerful form functionality. More details can be found on Symfony’s form customization page.
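For example, rendering a single field in a Twig template is typically just a one-liner (form.name here is a made-up field):
{{ form_row(form.name) }} {# outputs the label, the widget and any errors for this field #}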
One thing I found was that sometimes you don’t really need to modify the original form_widget, you just need to, say, wrap its contents in a div with a specific class. So, ideally there’d be a way to override form_widget but still call the parent/original one. Luckily there is, but it was a little tricky to figure out.
Let’s say I need to add a <div class="select"> around a particular field. For this, we can override the choice_widget block, which generates the <select> tags. In your form theme you can do something like:
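{% block choice_widget %}
    {# wrap the standard <select> markup in our own div – 'select' is just an example class #}
    <div class="select">
        {{ parent() }}
    </div>
{% endblock %}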
And you’d think that would work with no issues! However, I kept getting errors along the lines of “Can’t call parent function” or “parent function does not exist”. I found it surprising, because you can simply call choice_widget before you add your code to override it, so the block must be in the global scope, right? Well, somehow it isn’t! Luckily the fix is simple: import the base Twig theme at the top of your own theme:
{% use 'form_div_layout.html.twig' %}
And voilà! You should now have your select wrapped in its styled div.
This article is about how, at Smartbox, we improved our release process over a period of around 2 years, going from ad hoc releases, with little structure and performed outside of business hours, to having fully managed processes and releasing during the day, only reverting to releasing outside of office hours if absolutely necessary.
When I joined, we had 2 teams doing releases of our public facing e-commerce website, each containing between 7 and 10 developers and testers, reporting to a Web Manager. One team looked after the e-commerce site, while the other took care of people who received one of our boxes as a gift. Now, there are around 8 different teams who could potentially release to the public website, as well as other teams releasing various ancillary micro-services.
Chaos
This story begins around 2015, when I joined Smartbox. At that time, we were a much smaller organization (~280 people in total, vs. 600+ today). The teams working on the platform worked in 2–3 week sprints and would do a release at the end of each.
Process
A team would get in touch with the Web Manager as they approached the end of a sprint and had a release candidate ready to go out. There were rarely any scheduling conflicts, so he would just say ‘OK, go live on Wednesday’ or similar. The release process would start with going to pre-production during the day, followed by production at around 10 PM (everyone working remotely from home). When ready to begin on production, we would put the site behind a maintenance page, start the deploy (including any extra required steps), kick off the regressions and do manual UAT on production. To finish up by midnight was a rarity, but usually we would be done by 2 AM, at which stage we would remove the maintenance page and go to bed.
It should be obvious that there were a number of issues with this:
When there were issues and the release dragged on, people would get tired and ‘just want to finish’
After the release was deployed, everyone would just go to bed and no further monitoring took place, which could lead to nasty surprises for your colleagues the next morning
There wasn’t a full set of engineers and management for support during the release if there was an issue
Why duplicate the UAT effort, when it had already been done twice, on a project environment and on pre-production?
On top of all this, there was no record of a release. The codebase is versioned with git tags but there was no centralized list detailing what was in each release, what team did it, issues encountered, etc.
Release Plan
Each release would have (and still has) a release manager, to coordinate all the steps involved in deploying code. In preparation for a release, the manager would fill out an Excel spreadsheet of all the steps for pre-production and production; this was the Release Plan. Often certain tasks needed to be carried out on the production server, and these would be done by an infrastructure engineer, so the Release Plan would have the extra info for them. This plan would just be emailed/shared via chat to whoever needed it and would essentially then be lost forever after the release. This also made it hard for a new release manager to come along, as they had no frame of reference for the Release Plan.
A sample Release Plan from 2015. Note the Method of Procedure tab — there would be extra info in here, which required flipping between this tab and the Release Plan tab — awkward! Also, the list of tickets went into Release Notes — a manual copy and paste effort
While we’ve always had a suite of unit tests, it was up to the developer to run them locally and ensure nothing had broken. However, sadly the tests weren’t always run and there were instances where we would release code with a simple unit test bug in it.
We also had no way of tracking database changes, or what state the database was in. In Magento (which is what our e-commerce platform is based on), to do a DB change, you write a script called an installer. The installers are versioned in Magento, so it’s possible to tell what state the DB is in by looking at the current version of each installer. Often, when deploying, either the installer wouldn’t run, or there would be a DB refresh on pre-production and various other issues. This resulted in a lot of lost time trying to figure out why various functionalities were broken. We had no way of definitively and easily saying ‘this is what the database should look like’ after a deploy.
Summary
Infrequent, nighttime releases
Nothing was tracked or centralized
Buggy code got released
Improvements
The company knew it was about to expand its workforce massively over the next few years, since it was acquiring competitors and had big plans to build a brand new back-office infrastructure. More development teams were always going to result in more releases, so it was pretty evident that we were going to need a new process whereby there could be a release every day, or even multiple releases on the same day.
Additionally, not all these teams would be working on the same codebase. This enabled a relaxing of the restriction that only one team could release per day. However, we still needed more control over who released what and when.
Process
We started by having a weekly meeting on Fridays, where the Manager or Tech Lead of each team looking to release the following week would attend, explain what they were releasing and when they wanted to do so. The meeting was coordinated by the ‘gatekeeper’, although that phrase never really caught on! It was all very analogue and manual, involving hand-drawn calendars, lots of (amicable) discussion and the gatekeeper keeping track of everything. When everything was decided, an email would be sent out with the plan for the following week.
Another improvement we made at this time was to begin releasing during the day. We realized the maintenance wall was overkill for most releases, especially ones that weren’t changing the structure of the databases. We also reduced a lot of the required UAT, since it had already been done on a different environment, so it was a pointless duplication of effort.
Move to GitLab, Continuous Integration/Continuous Deployment
It was at this point also when we moved from doing everything in Git via the command line to having our entire codebase hosted in GitLab. This meant building a release candidate would be as simple as clicking a ‘Merge’ button for each ticket in your upcoming release. Other tasks like merging to master, creating a tag and resolving conflicts could all be done via clicks of the mouse.
Moving to GitLab also enabled us to begin initial attempts at CI/CD. As mentioned above, often a developer would commit PHP code on a feature branch that broke a PHP unit test. To alleviate this, we built pipelines in GitLab so that when a branch was pushed, we would run the unit tests in a Docker container, and only when the pipelines were successful could a feature branch be merged into a release branch.
GitLab integration: only merge after the tests have passed
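The job itself doesn’t need to be complicated. A rough sketch of the kind of pipeline described (the image name and commands are placeholders, not our actual configuration; the ‘only merge if the pipeline succeeds’ part is a project setting in GitLab rather than anything in this file):
# .gitlab-ci.yml
stages:
  - test

phpunit:
  stage: test
  image: php:7.1-cli        # any image with PHP and the project's dependencies available
  script:
    - vendor/bin/phpunit    # a failing unit test fails the pipeline, which blocks the merge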
Once we had that pipeline in place, we were able to add other pipelines, e.g. to generate our zipped-up JavaScript app.min.js file and to do a PHP Composer run, and we even started work on a front-end unit test suite using Karma and PhantomJS.
Release Plan
Around this time, we moved away from our in-house wiki and started using Atlassian’s Confluence instead. This allowed us to create a Release Plan Template, which could be used as a basis for all Release Plans. In this template, we added every single conceivable step that could be requested during a release, with all the extra information in the one document. So, when someone started doing a release, all they would have to do is create a new file using the template and generally remove steps. Obviously, anything very specific to that person’s release could also be added in as appropriate.
Using these Confluence templates also meant that we now had a single source for all release plans and they could be shared with a URL, worked on at the same time and kept up-to-date.
Easily create a new Release Plan using this template
Summary
Daily releases, weekly release planning meeting
Use of GitLab, pipelines for verification and asset generation
Standardized and centralized release plans
Sanity
Process
As we continued to grow, the company hired a Change Manager, with a proper background in change management. This Change Manager is still in place today and closely follows all releases throughout the company, to make sure they’re progressing, there are no clashes and that everything stays organized. Several chat rooms around release coordination, production issues and the like were created, which helps people collaborate on who’s doing what and when.
We started using a system called Service Desk to track all changes, or Production Change Requests (PCRs) as they’re called. An advantage here is that a record of ALL changes is maintained, with issues and resolutions attached, so it’s very easy to go back and see what happened and, most importantly, what the solution was if an issue re-occurs.
Finally, we introduced a morning standup, called the Change Advisory Board (CAB) where people talk about what they’re hoping to do over the current and following day, as well as discussing any on-going production issues.
So, we’ve gone from ad hoc releases to having the following well-established process:
Build your Release Plan
Outline your change in a PCR on day -2 (i.e. two days before go-live), or earlier!
Go to the CAB on day -1
Announce you’re ready to release in a chatroom on your go-live day
Start releasing when you’ve got confirmation it’s OK to proceed
If one does encounter issues while releasing, these are also now tracked in the Release Plan, along with any corresponding tickets that are raised for other teams to fix. Every Monday morning, there is a meeting to discuss issues teams faced the previous week and to ensure these are being resolved by the Operations team. This ensures we don’t keep facing the same problems (repeat offenders) when releasing and that problems do actually get fixed.
Release Plan
Not much has changed in the Release Plan. The overall structure has evolved into different sections; we track the timings for each step, as well as issues encountered, as mentioned above. We also include results from automated tests, which helps to see if an issue has occurred before.
Summary
Have a dedicated change manager
Centralize and track ALL changes
Communicate everything you’re doing in a standard way
Dreaming?
Ultimately we would like to move to a true Continuous Deployment set-up, whereby when you finish a ticket, you simply merge to master and everything from there is automatic. We would move away from having Production and Pre-Production servers to having a Blue/Green set-up, where both are Production-ready and it’s simple to flip between the two. The release process would then consist of a developer doing the following:
Merge branch to master
This kicks off running the unit tests
Deploy master to ‘blue’ server on success
Run the regressions
Flip ‘blue’ and ‘green’ servers on success, so blue is now serving the code and includes the branch just merged
We’ve put a certain amount of this in place, with GitLab and the pipelines, but we’ve some way to go before we achieve this dream scenario.
TL;DR even the summaries
Centralize and track ALL changes!
Automate as much as possible (pipelines, unit testing)
Communicate in a standard way (i.e. have a fixed place/process to announce what you’re doing)
A general rule I follow when using KnockoutJS is that there should be no DOM manipulation in the viewModel. The viewModel should be completely independent of any markup, while any changes to the DOM (via jQuery or otherwise) should be handled in the binding handler. This makes your viewModels much more portable and testable.
As I’m sure you’re aware if you’re reading this article(!), KnockoutJS’s binding handlers are applied to an element and have init and update functions: init is called when the binding is first applied, while update is called whenever the bound value in your viewModel changes. Within your init function, you can set up various DOM-element-specific jQuery handlers, while within your update function you can perform various DOM manipulations, trigger events etc., as well as reading from/updating your viewModel and much more.
A common situation I’ve come across a number of times is: say you have a big div with plenty of buttons and links that are tied into external jQuery plugins and DOM elements, and you want to perform certain actions when they’re clicked or when other changes happen in your viewModel. You don’t really want loads of binding handlers for each separate change that might happen in your viewModel; your codebase could get quite big quite quickly. What I’m about to propose is a structure whereby you apply one binding handler to the entire div, have the viewModel record which action just happened, and let the binding handler call the corresponding DOM-manipulating function, keeping all DOM work out of the viewModel.
So, I’ll start with the viewModel. I’m going to have an observable action attribute and 2 functions linkClicked and buttonClicked. (Please bear in mind, this is a very simple example for illustration purposes, you wouldn’t really call viewModel functions linkClicked etc.!) There’ll also be a resetAction function, which will be explained shortly.
exampleViewModel = (function ($) {
    var viewModel = function () {
        this.action = ko.observable('');
    };

    viewModel.prototype.resetAction = function () {
        this.action('');
    };

    viewModel.prototype.linkClicked = function () {
        this.action('jQLinkClicked'); // prepended "jQ" to the function name to help the reader later
    };

    viewModel.prototype.buttonClicked = function () {
        this.action('jQButtonClicked');
    };

    // JS Module Pattern
    return viewModel;
}(jQuery));
So now we can see that whenever we click either the link or the button, our action attribute will be updated and thus trigger the update function in the exampleBindingHandler binding handler that’s applied to the div. Let’s look at that binding handler now:
ko.bindingHandlers.exampleBindingHandler = {
    init: function (element, valueAccessor, allBindingsAccessor, viewModel) {
        // do whatever initial set up you need to do here, e.g.
        $('body').addClass('whatever');
    },
    update: function (element, valueAccessor, allBindingsAccessor, viewModel) {
        // this will be called whenever our observable 'action' changes

        // get the current value (ko.unwrap also registers the dependency on the observable)
        var action = ko.unwrap(valueAccessor());

        // reset to empty, so the same action can be triggered again later
        viewModel.resetAction();

        // very simple handling for now: just alert which action happened
        if (action !== '') {
            alert(action);
        }
    }
};
So you can see from the above how we can route various viewModel changes out to the binding handler and manipulate the DOM in there. We read and save action from the valueAccessor, then reset it via the viewModel’s resetAction function, just to keep things clean.
At this point we have very simple alerts for each of our actions, but of course in real life you’ll want to call your jQuery plugins, change the DOM etc. To keep things clean, what we can do is have a simple object literal with a function for each of the actions and, within those functions, do our heavy jQuery lifting, something along the lines of:
var _ = {
    jQLinkClicked: function () {
        // e.g.
        $('.class').parent().remove();
    },
    jQButtonClicked: function () {
        // e.g.
        $.plugin.foo();
    }
};
ko.bindingHandlers.exampleBindingHandler = {
    init: function (element, valueAccessor, allBindingsAccessor, viewModel) {
        // do whatever initial set up you need to do here, e.g.
        $('body').addClass('whatever');
    },
    update: function (element, valueAccessor, allBindingsAccessor, viewModel) {
        // this will be called whenever our observable 'action' changes

        // get the current value
        var action = ko.unwrap(valueAccessor());

        // reset to empty
        viewModel.resetAction();

        // hand the DOM work off to the matching function in our '_' object
        if (action !== '' && typeof _[action] === 'function') {
            _[action]();
        }
    }
};
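For completeness, the markup this assumes is along these lines (a sketch — the handler bound to the wrapping div, with the click bindings pointing at the viewModel functions):
<div data-bind="exampleBindingHandler: action">
    <a href="#" data-bind="click: linkClicked">A link</a>
    <button data-bind="click: buttonClicked">A button</button>
</div>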
In my current job, we use Foundation for stuff like modal popups, fancy drop downs etc. I haven’t used it too much but I know for the modal dialogs you can either instantiate them via Javascript ($('#elem').foundation('reveal', 'open', {option: 'value'});) or via HTML attributes (<a href="#" data-reveal-id="elem">Open</a> and <div id="elem" data-reveal>).
Passing options to Foundation via Javascript is pretty trivial, as can be seen in the example above. However, doing this via HTML attributes isn’t so straight-forward and I found the documentation pretty hard to find. Luckily I was able to figure it out and it’s simple enough: you add a data-reveal-init attribute and a data-options attribute on your modal div. The options are separated by semi-colons and are of the format option: value, e.g.
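<!-- a sketch: the option names here are just examples of reveal settings -->
<a href="#" data-reveal-id="elem">Open</a>

<div id="elem" data-reveal data-reveal-init
     data-options="close_on_background_click: false; animation_speed: 250">
    Modal content goes here
</div>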
Recently I started a new job at a company that is looking to transition away from a customised, unstructured jQuery module set-up to using KnockoutJS and RequireJS for its modules. This approach was chosen because the core platform is based on Magento, and the forthcoming Magento 2 uses KnockoutJS heavily throughout its frontend templates. As a good starting point and proof of concept, we decided to look at converting our existing custom-autocomplete module from a combination of EJS and jQuery to pure KnockoutJS. Luckily for me, I was the one who got to implement it, and thus learn a new skill!
I’m not going to go into the ins and outs of how KnockoutJS works, but in short it’s an MVVM system, where you have Models, Views and ViewModels, the latter being the interface between the other two (and between the client and the server). This autocomplete was a standard input field whereby, once 3 characters are typed, an AJAX call is made to the server looking for strings that match the search string, and a clickable list of results is displayed underneath the input field. Additionally, you could use the arrow keys to select items in the menu, as well as the mouse. We also have different instances of the autocomplete, to search for different types of entities (e.g. searching for a product vs. searching for a place), so we need the code to work with each.
From this point on I’m going to assume at least a basic knowledge of KnockoutJS, how it uses data-bind etc.
The View Model
So, first up we’ll want an Autocomplete viewModel, to handle the DOM events in the view (e.g. keyup etc.), fetch data from the server and call the correct model to format the received data. It’ll have 2 observable attributes: suggestions, an array of suggestion objects, and q, the incoming query from the user. As a parameter we’ll pass it the model type to format the suggestions (e.g. LocationSuggestion below) and we’ll have functions to fetch suggestions as JSON from the server (loadSuggestions), add them to our suggestions array (addSuggestion, formatting the data via the model along the way) and clear our array (clearSuggestions), as well as a helper function to look for valid character key presses (validKey). None of this is overly complex and it’s well commented, so I’ll just leave the whole class here:
/**
 * AutoComplete viewModel. Handles the observable events from the view, requests data from the server and calls
 * the corresponding Model above to format fetched data
 *
 * @param options JSON object of options, to contain:
 *  - url: URL to request the search results from
 *  - suggestionEntry: required model (i.e. one of the above) to format the data
 */
function AutoComplete(options) {
    // KnockoutJS standard is to refer to 'self' instead of 'this' throughout the class.
    // It's because 'this' in a sub-function refers to the function, not the viewModel
    var self = this;

    $.extend(self, options);

    // Array to store suggestions received from the server
    self.suggestions = ko.observableArray([]);

    // Value of input field that user queries
    self.q = ko.observable('');

    // Attribute to store the current AJAX request. Means we can cancel the current request if the observable 'q' changes
    self.ajaxRequest = null;

    // Empty the suggestions array (which hides the drop down)
    self.clearSuggestions = function () {
        self.suggestions.removeAll();
    };

    // Abort the in-flight AJAX request, if there is one
    self.clearAjaxRequest = function () {
        if (self.ajaxRequest) {
            self.ajaxRequest.abort();
            self.ajaxRequest = null;
        }
    };

    /**
     * Append a JSON search result to our suggestions array. Instantiates the correct model to format the data
     * (view is rendered automatically by KnockoutJS)
     *
     * @param suggestion JSON object, returned from search server
     */
    self.addSuggestion = function (suggestion) {
        self.suggestions.push(new self.suggestionEntry(suggestion, self.q()));
    };

    /**
     * If the user has entered a valid search string (3 or more latin-ish or punctuation characters), cancel the
     * current AJAX request (if any), fetch the data from the server, format it and store in 'suggestions' array
     *
     * @param obj HTML <input> element (not used)
     * @param event The event object for the triggered event (keyup)
     */
    self.loadSuggestions = function (obj, event) {
        // if a valid, non-control, character has been typed
        if (self.validKey(event)) {
            self.clearAjaxRequest(); // cancel current request

            var q = self.q();

            // if they've entered less than 3 characters, just clear the array, which clears the suggestions drop down
            if (q.length < 3) {
                self.clearSuggestions();
                return;
            }

            // request data from the server
            self.ajaxRequest = $.getJSON(self.url, {term: q}, function (response) {
                self.clearSuggestions(); // clear out current values
                for (var i = 0; i < response['suggestions'].length; i++) {
                    self.addSuggestion(response['suggestions'][i]); // add search result
                }
            });
        }
    };

    /**
     * Check the key that was pressed is valid: alphanumeric, space, punctuation or backspace/delete
     */
    self.validKey = function (event) {
        var keyCode = event.keyCode ? event.keyCode : event.which;
        // 8 is backspace, 46 is delete
        return keyCode == 8 || keyCode == 46 || /^[a-zA-Z0-9\s\-_\+=!"£$%^&*\(\)\[\]\{\}:;@'#~<>,\.\/\?ÀÁÂÃÄÅàáâãäåÒÓÔÕÕÖØòóôõöøÈÉÊËèéêëðÇçÐÌÍÎÏìíîïÙÚÛÜùúûüÑñŠšŸÿýŽž]$/.test(event.key);
    };
}
We also store the current AJAX request with the object in the ajaxRequest attribute. By doing this, we can cancel any existing requests as the user types more keys. So, when they type the first 3 characters, a request is fired off; when they type the 4th character, we’ll cancel the existing request if it hasn’t finished and do a new search for the longer string.
The Model(s)
For this example, I mentioned above that the user could be searching for locations or products; let’s go with a location search. Below, we have a class LocationSuggestionEntry, which takes a JSON object that was returned from our server, formats the matched string by wrapping <strong> tags around the bit of the string that was matched (via the global accentInsensitiveRegex function, which I unfortunately don’t have the code for), generates the URL for the result and translates the type of location found (e.g. city, county etc.).
/**
 * Model for a location search result. Formats data to be displayed in the HTML view
 *
 * @param data JSON object
 * @param q User's original query
 */
function LocationSuggestionEntry(data, q) {
    this.type = (data.type === 'region' && Number(data.id) > 999) ? 'country' : data.type;

    var separator = data.url.indexOf('?') !== -1 ? '&' : '?';
    this.url = data.url + separator + 'autocomplete=1&ac-text=' + q;

    this.id = data.id;
    this.label = data.label;

    // wrap what the user typed in a <strong> tag
    var regexp = new RegExp('(' + accentInsensitiveRegex(q) + ')', 'gi');
    this.labelFormatted = data.label.replace(regexp, '<strong>$1</strong>');

    // this.translatedType, a display name for this.type, is also set here (via a translation helper not shown)
}
So, for the HTML side, we need an <input> field for the user’s query and a <ul> for the search results. The <ul> will obviously be hidden if we’ve no results to show. We wrap the whole thing in a <div> with class autocomplete, which we’ll use when binding the whole thing together later.
For the <input> field, we bind our AutoComplete‘s q attribute to Knockout’s textInput data binding (textInput: q), so that every time the value of the <input> changes, q will too. Additionally, we want to fire our loadSuggestions function, which will check the length of q and fetch suggestions from the server if it’s 3 or more characters; this is achieved by calling loadSuggestions when a Javascript keyup event is fired on the <input> (event: {keyup: loadSuggestions}).
The HTML for the <ul> is also fairly straight-forward. If we have any suggestions to show, we want to add the has-results class to the <ul> (css: {'has-results': suggestions().length > 0}) and of course hide the <ul> when there’s less than 3 characters typed in the <input> (visible: q().length > 2). Assuming we have suggestions to show, we loop through the suggestions array, displaying an <li> for each, containing the suggestion’s labelFormatted and translatedType, as well as adding some attributes to the surrounding <a> (data-bind="attr: {href: url ...).
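Putting those bindings together, the markup looks roughly like this (a sketch — the class names and exact structure will depend on your own templates):
<div class="autocomplete">
    <input type="text" data-bind="textInput: q, event: {keyup: loadSuggestions}" />
    <ul class="autocomplete-results"
        data-bind="css: {'has-results': suggestions().length > 0}, visible: q().length > 2, foreach: suggestions">
        <li>
            <a data-bind="attr: {href: url}">
                <span data-bind="html: labelFormatted"></span>
                <span data-bind="text: translatedType"></span>
            </a>
        </li>
    </ul>
</div>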
To get all this working nicely, you’ll need CSS for the <ul> and its <li> children. Additionally, you might want code to look out for when the up and down arrows are pressed on the keyboard and highlight the next row correctly. The code I have for this isn’t mine, so I’m not going to put it here. However, I will point out that to add any fancy jQuery to your view, i.e. to handle these up/down arrow keypress events, you can use KnockoutJS’s custom binding handlers. This keeps business and presentation logic separate from each other. So, in JS you’d have something like:
/**
 * This custom binding is how knockout lets you set up your HTML elements. It's separate from the viewModel, which
 * should purely deal with business logic, not display stuff.
 *
 * Sample usage: <input type="text" data-bind="autoComplete">
 */
ko.bindingHandlers.autoComplete = {
    /**
     * Called when the HTML element is instantiated
     */
    init: function (element, valueAccessor, allBindings, viewModel) {
        var $el = $(element);

        // specific jQuery code goes here
    },
    update: function () {} // not needed here
};
The HTML for your <ul> would change to <ul class="autocomplete-results" data-bind="autoComplete, css: {'has-results': suggestions ... – note the addition of autoComplete on the data-bind.
Facebook have long had the ability for developers to write custom apps and embed them as tabs in people’s or companies’ Facebook profile pages. What I’m talking about here is when you write your own HTML app and host it at e.g. https://fb.mysite.com, which is then embedded into profile pages via an iframe. These apps then have URLs like https://www.facebook.com/profile.site/app_123456789, where 123456789 is your app ID within Facebook.
I’ve written one such app, in PHP, which has sub-pages, so while the root is https://fb.mysite.com, using the app will call pages such as https://fb.mysite.com/product/1. Seeing as this is within an iframe, the URL within the browser remains at https://www.facebook.com/profile.site/app_123456789 while you browse around the app. I recently had a request from a client about how they could link to a sub-page from within a post on their profile page. They wanted to post something like ‘Check out my product at <link>’, where clicking on the link would load up the iframe app and bring the user to the specific product. This is achievable, but it’s not exactly straight-forward and requires some work on the developer’s part. In an ideal world the link would simply be https://www.facebook.com/profile.site/app_123456789/product/1!
The way I managed to achieve this was using Facebook’s app_data parameter. Here, you can pass any data you want in the parameter and it’ll end up as part of the signed_request variable in $_REQUEST at https://fb.mysite.com/index.php. The way we’re going to structure these deeplinks is to pass a JSON object to app_data containing a page key, with the sub-page we want, in this instance products/1, so our deeplink is going to look like https://www.facebook.com/profile.site/app_123456789?app_data={page:products/1}. Not exactly elegant, but it’ll have to do! You could simply set app_data to products/1, but there may come a time when you want to pass other data in addition to the page, so I opted to go down the JSON route.
Now that we know what to expect, we need to decode $_REQUEST['signed_request'] (which should be available to your https://fb.mysite.com/index.php), json_decode the app_data from the result, validate page, then redirect the browser accordingly.
To decode $_REQUEST['signed_request'] I used Facebook’s PHP SDK. Once we have the signed request as an array, we decode the JSON from app_data. Then, we check for the presence of page, validate it (I’ll leave the validation code up to yourself!) and send them on their way. This is pretty straight-forward, so is probably best illustrated with some code:
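<?php
// a sketch of the idea – error handling and the page whitelist are left out,
// and it assumes the (v3-era) Facebook PHP SDK, which provides getSignedRequest()
require_once 'facebook-php-sdk/src/facebook.php';

$facebook = new Facebook(array(
    'appId'  => 'YOUR_APP_ID',
    'secret' => 'YOUR_APP_SECRET',
));

$signed_request = $facebook->getSignedRequest();

if (!empty($signed_request['app_data'])) {
    $app_data = json_decode($signed_request['app_data'], true);

    // is_valid_page() is the validation function left up to you
    if (!empty($app_data['page']) && is_valid_page($app_data['page'])) {
        // redirect within the iframe to the requested sub-page
        header('Location: /' . $app_data['page']);
        exit;
    }
}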
TL;DR The MultiViews option in Apache will automatically map e.g. /xyz/ to /xyz.php
I was recently creating a new section of the website I work for and decided to opt for tidy URLs, for SEO purposes, instead of the standard long ?url=format URLs that we have elsewhere. Let’s say the new section I was creating was called David’s Boxes, so I wanted to have relative URLs like /davids-boxes/big/blue map to davids-boxes.php?size=big&colour=blue. Purely co-incidentally, there happened to be a defunct davids-boxes folder in our www directory, which contained an old WordPress install, which I promptly deleted (more on this later). Then, I set up rewrite rules in our www/.htaccess to do the example mapping above.
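The rules were along these lines (a sketch rather than the exact rules we used):
RewriteEngine On
# /davids-boxes/ -> davids-boxes.php
RewriteRule ^davids-boxes/?$ davids-boxes.php [L,QSA]
# /davids-boxes/big/blue -> davids-boxes.php?size=big&colour=blue
RewriteRule ^davids-boxes/([^/]+)/([^/]+)/?$ davids-boxes.php?size=$1&colour=$2 [L,QSA]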
Everything was working fine locally: /davids-boxes/ mapped to /davids-boxes.php and /davids-boxes/big/blue mapped to /davids-boxes.php?size=big&colour=blue, all as expected. However, when I put the .htaccess file onto our test server, I couldn’t get the rules to match properly: everything mapped to the basic /davids-boxes.php, i.e. with no extra GET parameters. I tried a different order of rules, moving the rules to the top of the .htaccess etc., but nothing worked. Then I simply deleted the rules from the .htaccess, expecting /davids-boxes/ not to map to anything, but it still strangely mapped to /davids-boxes.php as before. This led me to believe there was another rewrite rule somewhere else (a suspicion helped along by the previous WordPress install). Searching the entire codebase, which includes all ‘sub-’.htaccess files, yielded no results, so then I began thinking it might be the server…
I had a look in our sites-available Apache configs, expecting there might be some sort of obvious generic rewrite to map any e.g. /xyz/ to xyz.php; no such luck. Going through each line in the config, I noticed we had the FollowSymLinks and MultiViews options enabled in the <Directory> tag. I was familiar with the former, but not the latter. Investigating MultiViews, it turns out this was the thing doing the automatic mapping I was experiencing! The documentation states “if /some/dir has MultiViews enabled, and /some/dir/foo does not exist, then the server reads the directory looking for files named foo.*, and effectively fakes up a type map which names all those files”. Such relief to figure it out. I checked with our CTO; he didn’t know how it got there, so after removing it on testing and doing a quick test, we got rid of it everywhere and my problems were solved.
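If you run into the same thing, the fix is simply to drop MultiViews from the Options line in the relevant <Directory> block (a sketch, using a generic docroot path):
<Directory /var/www/>
    # was: Options FollowSymLinks MultiViews
    Options FollowSymLinks
</Directory>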
I was doing a bit of work with Canvas recently, manipulating images in the browser and writing the results out to files. I was looking for a package that could do various effects on the images, such as sharpen, blur etc. and came across the Pixastic package used in similar applications. However, unfortunately, the website for the package is currently down and I couldn’t find much documentation on it anywhere. So, I had to look at the source code to figure out how to call the various functions. Not the end of the world, but I just thought I’d stick some simple examples here, to maybe help get others started with the package and to direct them to the source code for more information!
I got the source code from jsDelivr, which is minified, but there is unminified source on GitHub.
The 3 functions I was looking to use were sharpen, brighten/darken and blur. I’ll go through each individually. Firstly though, I’ll mention that the first parameter to each function is a JavaScript Image object, which obviously has a src attribute with your image’s contents. The whole point of this was that I was building a tool with buttons which, when clicked, would perform each of the functions above on the current Image. When applying a filter, it’s best to track the current level of e.g. brightness, increase/decrease this value, reset the image to how it originally looked and then apply the filter with the new value. This is better than say brightening it by 10%, then brightening the result by 10% again.
Seeing as I used brightness for the example above, I’ll start with that. Also, to darken an image, you simply reduce the brightness value, obviously.
// initial code setup
var img = new Image();
img.src = 'whatever';
img.original_src = img.src; // for doing the resets

// add the current value of each filter to the Image
$.extend(img, {brightness: 0, sharpen: 0, blur: 0});

// ...

// 'Brighten' button click handler
$('#brighten').click(function () {
    img.brightness += 2; // brighten by 2 (for darken, reduce by an amount; works with negative values)

    // max brightness of 255
    if (img.brightness > 255) img.brightness = 255;

    // now we reset the image by creating a copy
    img.src = img.original_src;

    // now, apply the filter
    img = Pixastic.process(img, "brightness", {
        brightness: img.brightness
    });
});
For sharpen, I found it best not to sharpen by the same amount each time, it just didn’t lead to nice results.
// 'Sharpen' button click handler
$('#sharpen').click(function () {
    img.sharpen += 1; // placeholder step – in practice, vary the amount rather than sharpening by the same value each time

    // now we reset the image by creating a copy
    img.src = img.original_src;

    // now, apply the filter
    img = Pixastic.process(img, "sharpen", {
        amount: img.sharpen
    });
});
Lastly, blur was a bit more straightforward, increasing by 1 every time:
// 'Blur' button click handler
$('#blur').click(function () {
    img.blur++;

    // now we reset the image by creating a copy
    img.src = img.original_src;

    // now, apply the filter
    img = Pixastic.process(img, "blur", {
        amount: img.blur
    });
});
As I’m sure you’ve spotted, there’s a certain amount of repetition in the code above, starting with the line “// now we reset…”. What I did was to write a function called applyFilters, which you call once you’ve calculated your new value for brightness/sharpen/blur. This will then reset the image and apply all 3 filters. With the code above, if you were to say brighten, then blur, only the blur would be applied, as the image is reset each time. Doing it this way removes that problem.
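Something along these lines would do it (a sketch, following the same pattern as the snippets above):
function applyFilters() {
    // reset back to the original image
    img.src = img.original_src;

    // then re-apply all 3 filters with their current values
    img = Pixastic.process(img, "brightness", {brightness: img.brightness});
    img = Pixastic.process(img, "sharpen", {amount: img.sharpen});
    img = Pixastic.process(img, "blur", {amount: img.blur});
}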
Sometimes in work we can be asked to do things we don’t like, and recently I was asked to look into implementing one of those homepage takeovers. Personally, I think these are awful and would like to think I wouldn’t degrade my site by implementing one, but they do make money and have a high click rate, so I can see why sites like to use them.
Normally they’re done using a fixed background wallpaper that’s clickable all the way to the edge of the page. However, I was asked to simulate this look using 2 existing skyscraper ads, 170px in width, to be positioned either side of the main content and fixed to the top of the page. Since it wasn’t entirely straightforward, I thought I’d blog about it here, to help anyone else in a similar situation. I’m not going to go into the specifics of displaying the ads, simply the CSS and Javascript involved in positioning them where you want them.
I should point out, this might be possible with just CSS, but changing a site’s fundamental structure to accommodate the new columns isn’t always possible. Also, you might only want the takeover on the homepage and not other pages. This solution should have minimal impact, as it simply adds 2 divs that can go anywhere in the HTML.
So, to describe the set-up: let’s say our main content is 1000px in width, centred in the page, and we want 2 170px x 1068px divs to contain our ads and line up on the right and left of that content, with the 2 ads remaining fixed at the top of the page, no matter how far we scroll down. We’ll give each of these divs a class of side-banner, with left-banner and right-banner IDs. Since these are going to be positioned explicitly, it doesn’t really matter where in the HTML you put them, maybe just inside your main content div. Initially, we’re simply going to position them in the extreme top corners of each side. I’m also going to give them different background colours, so we know they’re positioned correctly without having to worry about loading the ads (which can come later).
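As a starting point, the CSS for those 2 divs might look something like this (the background colours are just the temporary placeholders mentioned above):
.side-banner {
    position: fixed; /* stay pinned to the top of the viewport while scrolling */
    top: 0;
    width: 170px;
    height: 1068px;
}
#left-banner  { left: 0;  background: #fdd; } /* extreme top-left corner for now */
#right-banner { right: 0; background: #ddf; } /* extreme top-right corner for now */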
To align these alongside the content, I needed to write a Javascript function (position_banners()) to position them correctly. This function is called when the page finishes loading, as well as when the window is resized. It simply gets the body’s width, subtracts the width of our main content (remember 1000px), divides the result by 2 (as we’ve 2 sides), then further subtracts the width of our banners. This fairly basic formula works out the amount to move each div in from their corresponding edge, to line up with our main content. Then, we just use CSS left and right to position them nicely.
function position_banners() {
    var margin = ($('body').width() - 1000) / 2 - $('#left-banner').width(),
        left   = Math.floor(margin),
        right  = Math.ceil(margin);

    $('#left-banner').css({left: left + 'px'});
    $('#right-banner').css({right: right + 'px'});
}

// run on page load and whenever the window is resized
$(window).on('load resize', position_banners);
I know this code isn’t the tidiest, but should be enough to get the idea of what you need to do.
To further enhance the ‘takeover’ effect, you could display a 970px x 250px ‘billboard’ right at the top of your main content.
I often see different ways of storing and displaying opening hours for various businesses when browsing the web. Some seem to simply store them as an unstructured blob of plaintext and spit that back to the user. Others will store the exact times for each day and display all 7 days on 7 separate rows.
I think neither of these options is great, and I’ve come up with what I think is the ideal solution. Basically, we want to store the exact time for each day, but group similar days together. So, you could have lines like ‘Mon – Thurs: 9 – 5’, but if it’s, say, Tuesday at 10am, you’d also know that the business is currently open. I also think you should be able to have alternative text to display for days with no set hours (indicated by open and closed being NULL), so you could have something like ‘Sat – Sun: Open by appointment’.
So, let’s start with our table layout, see below. The primary key is id, which is just your standard auto_increment value. business_id is for if you have multiple businesses, each with their own opening hours, as was the case for me. You might want to build an index on this field too. If you’re just storing your own opening hours, dow could be the primary key and you could drop the id and business_id fields. open and closed are just simple strings, to store the time in 24-hour ‘HH:MM’ format. Doing it this way, you can still do comparisons like WHERE open > '09:00' and get the result you were expecting.
+---------------+-----------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-----------------------+------+-----+---------+----------------+
| id | mediumint(9) unsigned | NO | PRI | NULL | auto_increment |
| business_id | mediumint(8) unsigned | NO | MUL | NULL | |
| dow | tinyint(1) unsigned | NO | | NULL | |
| open | char(5) | YES | | 09:00 | |
| closed | char(5) | YES | | 17:30 | |
| optional_text | char(100) | YES | | NULL | |
+---------------+-----------------------+------+-----+---------+----------------+
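With that layout, a check like ‘is this business open right now?’ becomes a simple range query. A sketch (the business_id and times are made up; dow 1 is Tuesday, using the 0-indexed-from-Monday convention used later in this post):
SELECT COUNT(*) AS is_open
FROM opening_hours
WHERE business_id = 42
  AND dow = 1
  AND `open` <= '10:00'
  AND `closed` > '10:00';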
Next up, I want to show you a quick function to format the time. As a programmer, the time ‘13:30’ is easily read as ‘1.30pm’, but for the general public it might not be so simple, so this function will display your time in a more human-readable format, as in the example just given. Basically, we want to drop any leading 0s, drop any 0-value minutes, convert the time to a 12-hour version with ‘am’ and ‘pm’ and change the ‘:’ to a ‘.’ (this last bit is probably a bit region-specific). For midnight, we’ll store that as ‘24:00’ (which I know is technically the start of the next day!) and display it to the user as ‘midnight’, instead of the slightly confusing ‘0am’.
/**
 * Function to change an opening hour to a more human readable version,
 * e.g., 09:00 to 9am or 13:30 to 1.30pm
 *
 * @param String $time Time to format, in HH:MM format
 *
 * @return String Formatted time
 */
function format_opening_hour($time) {
    if ($time == '24:00') {
        $new_time = 'midnight';
    } else {
        list($hours, $minutes) = explode(':', $time);
        $hours = ltrim($hours, '0');
        $am_pm = ($hours >= 12) ? 'pm' : 'am';
        if ($hours > 12) $hours -= 12;
        $new_time = $hours;
        if ($minutes != '00') {
            $new_time .= '.' . $minutes;
        }
        $new_time .= $am_pm;
    }
    return $new_time;
}
OK, so displaying the data in a nice table, with similar days grouped together, is the next bit. We’ll have 2 columns, one for the day/days, the other for the time. If a day has a value for optional_text, then that value will be displayed and the times are ignored. I’m also going to add another block of optional text ($extra_text below) that will be displayed at the end of the table and is applied for all days, to be used for something like ‘phone anytime’. Finally, there’s a $short_day_names option, so you can choose between say ‘Mon’ and ‘Monday’.
I should also mention at this point: I’m returning a block of HTML here from a function, as well as mixing business logic with display logic; I realise this is generally a bad idea and some of this could be split into a function and a template, but seeing as it’s a simple 2-column table, I just kept it all together.
/**
 * Function to generate a simple html table for a business' opening hours
 *
 * @param Array  $opening_hours   Array of rows from opening_hours table, sorted by dow (0-indexed, starting with Monday)
 * @param String $extra_text      Extra block of generic text that applies to all days, goes at end of table
 * @param String $short_day_names Whether to use e.g. 'Mon' or 'Monday'
 *
 * @return String HTML <table> output
 */
function opening_hours_table($opening_hours, $extra_text = '', $short_day_names = false) {
    $dow = array(
        array('long' => 'Monday',    'short' => 'Mon'),
        array('long' => 'Tuesday',   'short' => 'Tue'),
        array('long' => 'Wednesday', 'short' => 'Wed'),
        array('long' => 'Thursday',  'short' => 'Thu'),
        array('long' => 'Friday',    'short' => 'Fri'),
        array('long' => 'Saturday',  'short' => 'Sat'),
        array('long' => 'Sunday',    'short' => 'Sun')
    );
    $key = ($short_day_names) ? 'short' : 'long';

    // first, find similar days and group them together
    if (!empty($opening_hours)) {
        $opening_short = array();
        // start with current day
        for ($i = 0; $i < 7; $i++) {
            $temp = array($i);
            // try to find matching adjacent days
            for ($j = $i + 1; $j < 7; $j++) {
                if (empty($opening_hours[$i]['optional_text']) &&
                    empty($opening_hours[$j]['optional_text']) &&
                    $opening_hours[$i]['open'] == $opening_hours[$j]['open'] &&
                    $opening_hours[$i]['closed'] == $opening_hours[$j]['closed'] ||
                    !empty($opening_hours[$i]['optional_text']) &&
                    !empty($opening_hours[$j]['optional_text']) &&
                    strtolower($opening_hours[$i]['optional_text']) == strtolower($opening_hours[$j]['optional_text'])) {
                    // we have a match, store the day
                    $temp[] = $j;
                    if ($j == 6) $i = 6; // edge case
                } else {
                    // otherwise, move on to the next day
                    $i = $j - 1;
                    $j = 7; // break
                }
            }
            $opening_short[] = $temp; // $temp will be an array of matching days (possibly only 1 day)
        }
    }

    $html = '<table>';
    $colspan = '';

    if (!empty($opening_short)) {
        $colspan = ' colspan="2"';
        foreach ($opening_short as $os) {
            $day_text = $dow[$os[0]][$key];
            if (count($os) > 1) { // if there's another, adjacent day with the same time
                $end = array_pop($os); // get the last one
                $end = $dow[$end][$key];
                $day_text = $day_text . ' - ' . $end; // append the day to the string
            }
            // at this point, $day_text will be something like 'Monday' or 'Monday - Thursday'
            if (!empty($opening_hours[$os[0]]['optional_text'])) {
                // optional string takes precedent over any opening hours that may be set
                $hours_text = htmlentities($opening_hours[$os[0]]['optional_text']);
            } elseif (!empty($opening_hours[$os[0]]['open'])) {
                // otherwise generate something like '9am - 5.30pm'
                $hours_text = format_opening_hour($opening_hours[$os[0]]['open']) . ' - ' . format_opening_hour($opening_hours[$os[0]]['closed']);
            } else {
                // if nothing, it must be closed on that day/days
                $hours_text = 'Closed';
            }
            // new row for our table
            $html .= '<tr>
                <td>' . $day_text . ':</td>
                <td>' . $hours_text . '</td>
            </tr>';
        }
    }

    // append the extra block of text at the end of the table
    if (!empty($extra_text)) {
        $html .= '<tr>
            <td' . $colspan . '>' . htmlentities($extra_text) . '</td>
        </tr>';
    }