Download Example Files (from above) and extract to folder
Open Hugin and import source files
Set the lens parameters (Focal Length 8.7mm, Crop 1.5): Full Frame Fisheye for the images taken at 90 degree yaw intervals, Circular Fisheye for the Zenith and Nadir images. Assign lenses so that the first 4 images are Lens 0, Zenith is Lens 1, Nadir is Lens 2.
Mask each image. For the first 4 images, mask off the tripod/panohead and any other extraneous objects, also the corners. Set crop circles for the Zenith and Nadir.
Now add control points to each overlapping pair of images. Concentrate on placing a good spread of points along the centre of the overlapping regions. There is no need to place points away from the likely seam lines.
Add vertical control points/lines so that the pano will be level
Start optimising the position, gradually including field of view, barrel, then everything except translation. Periodically check for control points with large errors, correct if necessary. Use the Custom Optimisation options to fine tune the process.
Check that the geometry of the assembly is satisfactory using the Preview Window.
Start on optimising exposure. Choose Low Dynamic Range, then Custom. Deselect optimisation of Vignetting and Camera Response. Optimise just the exposure values, then the Red and Blue Multipliers.
Finally, stitch and export the panorama in your preferred format.
You can also try stitching the image using the automatic mode in Hugin, or in any other panoramic program of your choice.
I had the opportunity to visit London and managed to find some time to shoot some panoramas. The weather wasn’t ideal with scattered showers and cloudy skies but the rain managed to hold off for the time I had available. In the lead up to Christmas, festivities were in full swing so Christmas lights, trees, and markets would provide colourful subjects.
I use a Sony A7m3 “fullframe mirrorless” camera. Previously I used a Sony A5100 with an APS-C sized sensor. They both have 24 megapixels but the A7m3 has considerably better dynamic range and high ISO performance.
I use the Samyang 12mm/2.8 Fisheye lens. This is a “fullframe fisheye” lens, where “fullframe” refers to the fact that the image circle covers the entire sensor. Previously I used the Samyang 8mm/2.8 Fisheye lens, which is a “fullframe fisheye” on the A5100 APS-C camera. The 12mm/2.8 has better flare resistance, which is noticeable on night scenes.
I use the Nodal Ninja 3 Mk III panohead with Rotator Mini and Nadir Adapter. I upgraded from the Nodal Ninja 3 Mk I/II hybrid that had served me well shooting the Konica-Minolta 7D, Sony A700, A580, A77, NEX-7, and A5100 with a Peleng 8mm, Sigma 10mm, Samyang 8mm/3.5, and Samyang 8mm/2.8 over the years. The new panohead fits my upgraded rig a bit better and the Nadir Adapter is a time saver. I use a Benro C-169M8 Travel Tripod with a Manfrotto 496 ballhead.
The raw files were imported into Capture One 20 Express (for Sony). I selected the frames that would be assembled. There were many spare frames since I would shoot extra shots to capture moving objects (people, cars, etc.), high contrast scenes (usually base exposure and -2EV), and to fill in shadows.
The selected frames were then adjusted in a batch. First thing is to apply the same white balance setting. Shots taken at night under a variety of mixed lighting can be challenging to get the white balance “right”. Usually, I try to balance the colour temperatures throughout the scene, taking care with the green-magenta shifts that can occur with fluorescent and LED lights. Other common settings include chromatic aberration correction, black level point, sharpness fall-off, and clarity.
Each frame is then adjusted for exposure. It is critical to preserve the highlights, which is done by a combination of highlight recovery and overall exposure. Some shadow recovery can be used to prevent the darker areas being crushed to black, but there is no need to fully recover shadows at this stage.
I usually leave the saturation and contrast untouched. These can be adjusted after stitching. Occasionally, I may need to reduce the contrast of the nadir shot so that it blends in with the edges of the other shots.
The adjusted raw files are now exported as 16-bit TIFFs in Adobe RGB colour profile. These are then imported into Hugin for assembly and stitching. There are other good programs for creating panoramas including PTGUI but Hugin has the advantage of being free. Hugin is also quite powerful, allowing detailed control of the alignment and exposure optimisation process. For a more automated solution, PTGUI is probably a better choice.
After the images have been imported into Hugin, the first step is to mask the images to remove the panoramic head and tripod that may intrude into the edges of the frame. The Zenith and Nadir are also cropped to circular regions. It is useful to define the focal length and projection type of the lens; 13.1mm and full-frame fisheye are used for the Samyang 12mm/2.8 Fisheye. I assign the same lens to the horizontal frames and separate ones to each of the Zenith and Nadir. The Nadir in particular will need to be optimised differently to the other frames in order to compensate for the likely shift in position.
Next step is to add control points (CPs) between overlapping frames. There are automatic methods but I prefer to manually add points. I find that I spend more time fixing automatically generated CPs than if I had added them manually. I start off by connecting the horizontal frames. To get a good stitch, all that is needed is that the frames match along a seam line, generally placed along the middle of the overlap region. Hence, I only add CPs along a line that stretches from the top to the bottom of the overlap. For the Nadir, I add 2 to 3 CPs per overlap with the horizontal frames. Extreme distortion parameters will usually need to be used to get it to match up with the rest of the pano.
After the CPs have been added, the position optimisation process can begin. I start with optimising just the position. Next I optimise position, angle of view, and barrel distortion. Finally, I optimise all parameters except translation. Throughout, I check for any CPs that have large errors as this might indicate that they are incorrectly placed. I either correct the placement or delete them if there are sufficiently many other CPs.
Hopefully, after several rounds of optimisation and tweaking of the CPs, the errors will be small (a maximum of a few pixels in a 14Kx7K pano) and visually the pano will look aligned, with no obvious errors in the preview window. Additional masks may be added to remove moving objects or else control which elements from each frame contribute to the final pano.
The next stage is photometric optimisation. For many years now, panoramic software has been able to cope with source photos shot at different exposure values and stitch them together in a seamless manner. This allows a more flexible and arguably optimal way to capture the initial source photos. I will use autoexposure to create a base layer together with bracketing of selected frames to capture blown highlights. In the exposure optimisation stage, Hugin can exposure-match all the frames in a near seamless way. I can then export the full pano at various brightnesses which hold shadow, mid, and highlight detail separately. These are then combined using Enfuse to produce a single pano that incorporates the detail in both shadows and highlights, in a process called exposure fusion.
The exported pano (exposure fused if necessary) from Hugin in equirectangular format is then post-processed. I use Picture Windows Pro 7 as my main image editor, with Photoshop 6 for cases where I need to do spot removal or extensive cloning. The first step is to adjust the contrast using curves. I’ll first bring up the shadows as these will be a bit dark since I try to protect the highlights in the previous steps. Then I’ll apply an S-curve to boost mid-range contrast and compress the highlights. The preserved highlights can look unnaturally dark so I’ll bring them up near the top of the histogram without blowing them out.
Sharpening of the image is very important. I’ll perform a large radius unsharp mask, a coarse sharpen, and a fine sharpen. The sharpening process utilises pixels in a surrounding region, hence problems can occur at the +/-180 degree boundary of the equirectangular image. To avoid a visible seam arising from such non-global adjustments, I’ll create a pano 720 degrees (or 1080 degrees) wide before applying the filters. Then I’ll crop out a central 360 degree section to get back to a regular equirectangular projection.
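The 720-degree trick can be sketched in a few lines. This is a minimal illustration (assuming NumPy and a filter function of your choosing), not the exact editor workflow:

```python
import numpy as np

def filter_without_seam(equirect, filt):
    """Apply a neighbourhood-based filter (sharpen, blur, etc.) to an
    equirectangular pano without creating a seam at +/-180 degrees:
    tile the image to 720 degrees, filter, crop the central 360 degrees,
    then roll the columns back to the original orientation."""
    h, w = equirect.shape[:2]
    doubled = np.concatenate([equirect, equirect], axis=1)  # 720 degrees wide
    filtered = filt(doubled)
    central = filtered[:, w // 2 : w // 2 + w]              # central 360 degrees
    return np.roll(central, w // 2, axis=1)                 # undo the rotation

# With an identity "filter" the image comes back unchanged; with a real
# kernel, pixels near the boundary now see their true neighbours.
img = np.arange(24.0).reshape(3, 8)
assert np.array_equal(filter_without_seam(img, lambda x: x), img)
```

Because the two copies abut, any filter whose support is narrower than 180 degrees sees the correct wrap-around neighbours at the old boundary.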
Final colour adjustment is applied, both selective and global saturation. Any touch up (spots, shadows, or minor stitching errors) are then performed in Photoshop. I produce both a final TIFF and a JPG that is in sRGB for export to the web.
Instead of using the same exposure value for all source frames, you can expose each shot optimally. For example, a scene may have 20 stops of contrast between one direction and the opposite, e.g. sun and shadow. The traditional method would be to use manual exposure and bracket, e.g. -6, -3, 0, +3, +6EV, using the camera DR to capture the ends of the contrast range. The +6EV shot would cover the shadows and the -6EV shot would cover the highlights. But this method is wasteful since usually the contrast in a single shot is less than the full bracket range. Often only a single shot in that direction is required, or else a second bracketed shot will cover the remaining fraction of high contrast cases. Instead of 30 shots (a 4+1+1 shoot pattern times 5 bracketed shots), it is usually possible to use only 7 or 8 shots in total. The final result is as good as the fully bracketed version but with a considerable reduction in shooting time. This can be significant in low light conditions and can also be a major factor in situations where there are moving objects and/or changing lighting conditions.
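The shot-count arithmetic above can be made concrete. A small sketch using the example figures from the text (the extra-frame count is illustrative):

```python
# Fully bracketed: every position in the 4+1+1 pattern (4 around, zenith,
# nadir = 6 positions) gets the full 5-frame bracket.
positions = 4 + 1 + 1
full_bracket = positions * 5          # 30 frames

# Adaptive: one auto-exposed base frame per position, plus a couple of
# extra frames only where the scene contrast actually demands them
# (e.g. a -2EV shot for two directions facing bright light).
extra_frames = 2
adaptive = positions + extra_frames   # 8 frames

print(full_bracket, adaptive)
```

The saving scales with bracket depth: the deeper the full bracket, the more redundant frames the adaptive approach avoids.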
Picture Windows Pro 7 is now free. A newer version has been developed, but version 7 is still a very useful image editing program. The paradigm is different to that of Photoshop so at first glance it may look to be fairly crude. But it is small, quick, and easy to use for many routine image adjustment tasks.
 I have no commercial relationship with any companies or their products mentioned in this post, other than being a satisfied customer of some of them.
I had the pleasure of visiting York for several days and was able to take a few panoramas of the Minster. Tourist entry is £10 (or £15 which includes a visit to the Tower); it costs a lot of money to maintain such a large and historic building so this entry charge is not so unreasonable. Rather progressively, there are no prohibitions on photography or tripods (as long as it’s not for commercial use and your tripod won’t damage the floor). For a panographer, this was a great opportunity and well worth the visit.
The main challenge, as is common in churches, is the huge contrast between the darkest shadows and the brightly lit windows. HDR capture is the surefooted, though laborious, default procedure. Often HDR and tonemapping give impressionistic results, derogatively called the “HDR-look”. It isn’t the fault of HDR capture per se but rather the tonemapping settings that are usually to blame for garish results.
Alternatively, with some judicious bracketing of just the relevant areas, one can produce decent results by exposure blending the shadows and highlights. This is what I have done here, using a moderate amount of exposure compensation on the brightly lit East and West ends of the Minster to retain some detail that would otherwise have been washed out.
The recent viral photo of a dress of controversial colour demonstrates that the perception of colour is a complex psychovisual phenomenon that involves not just the physical electrical impulses produced by the light receptors in the retina but also interpretation by the brain based on context and expectation. It also underscores the importance of getting your exposure and whitebalance correct.
Philosophers have used colour in debates about the notion of “qualia”, or “the ways things seem to us”. From a photographic perspective, we can understand how a dress which under normal lighting looks to be royal blue with black lace can look in the photo, to many people, as being white with gold lace. There are two issues with the disputed photo that contribute to the confusion: it is badly exposed and the auto white balance is extreme.
I assume that the photo was taken by a phone and that the autoexposure was fooled by the darkness of the dress. The photo was consequently overexposed, leading to the royal blue parts of the dress being rendered a much lighter shade of blue. The dark lace parts of the dress are also brightened and no longer look black.
The second issue is that the camera then tries to colour correct the image by setting a white balance. It misinterprets the scene as being taken under a high colour temperature, e.g. at dusk with the illumination of a clear blue sky. To compensate, it changes the colours in the scene, effectively trying to make the blue dress look more white as it assumes that this is the “actual” colour of the dress. To bring blue closer to white, the camera adds yellow to the scene (you can tell from the yellow colour cast to the background). The dark lace of the dress, already made a lighter shade by the overexposure, is now given this yellow colour cast which can be interpreted as being a golden colour, especially in the context of the rest of the dress which is rendered as palish blue.
These two effects combine with the way the brain tries to work out the colour of objects from visual stimuli. Since our vision has to operate under many different types of lighting, what we perceive as the colour of an object is not the raw cone responses, but what our brain retrodicts based on our assumptions about the lighting. An apple under noonday lighting will look different compared with the same apple viewed at dusk, but we don’t usually say that the apple has changed colour, because our brain takes the different lighting conditions into account and interprets the apple as being the same colour. The dress is simply a case of the camera being fooled, combined with the way our vision system operates.
Previously, I discussed selecting a tripod and head. A tripod opens up a whole new vista of photographic possibilities but to make the most of the stability of a tripod, proper technique is needed.
Stability and Mounting
The first consideration is making sure you’re not overloading your tripod legs and head. The heavier your equipment and the narrower the angle of view (more zoom), the more sensitive the final picture will be to residual vibrations and external forces such as wind. I seldom shoot at focal lengths greater than 200mm so my tripod needs are comparatively simple. If you have 300mm or longer zooms, a large sturdy tripod with a gimbal mount is recommended.
If your tripod does not have a quick release, you’ll be screwing your camera directly onto the mounting plate of the tripod head. One thing to watch out for is that if you take a photo in portrait format, the camera can twist and unscrew itself. You may find this happening if you tilt the camera to the left (so that the handgrip is to the top). By rotating the camera so that the handgrip is to the bottom, the torque of a heavy lens will instead tighten the mounting screw, preventing further rotation.
If you find yourself having to mount and dismount your camera from the tripod repeatedly, a quick release plate might be a good investment. It’s increasingly rare to find tripod heads without them in any case. The de facto standard mounting solution is the Arca-Swiss plate which has a dovetail and a matching clamp. Do note that some variation exists between nominally compatible plates and mounts from different manufacturers. There are many other types of quick-release mechanisms to suit cameras of different sizes and applications.
You should avoid raising the centre column of the tripod if possible as this makes the whole system more flexible. For the same reasons, you should spread the tripod legs out to give a wide base.
The main priority of using a tripod is stability, whether for long exposures or simply to get rid of hand shake. To reduce the possibility of introducing vibration when activating the camera, you can use a remote shutter release. The traditional mechanical remote release is a plunger attached to a sheath and cable screwed into the shutter button. This was superseded by electronic wired remotes which allowed functions such as an intervalometer and timed long exposures (minutes to hours). Wireless remotes (usually IR) are becoming popular nowadays. Any of these will prevent extraneous vibration due to touching the camera when starting the exposure.
If you find yourself without a remote release, your camera may have a shutter delay function, usually 2 or 10 seconds. The 10 second delay is to allow you to get into the shot; the 2 second delay is so that after pressing the shutter release, the camera and tripod have time to settle before the actual exposure. If you do not need to trigger the shutter at an exact time, then a 2 second delay can often work as well as a remote release.
Even if you use a remote release or camera timer, there are other sources of vibration that could impact upon sharpness. In an SLR, the mirror has to be raised out of the beam path before the shutter activates. This raising of the mirror can lead to vibrations. Some DSLRs have a mirror-lockup function that allows the vibrations due to the mirror being raised to dampen. It is often combined with the 2s timer delay. “Mirror-slap” will usually have the greatest effect at low to intermediate shutter speeds, a critical range of exposures where mirror-lockup gives the greatest benefit.
Even on non-SLR cameras, the shutter actuation can cause vibration. The opening of the front-curtain can cause blur. Some cameras have “electronic first curtain” (EFC) where the physical first shutter curtain is replaced by an electronic means of starting the exposure. This eliminates any vibration at the start of the exposure.
Sometimes the ground itself can vibrate, e.g. bridges. In this situation, it will be difficult to achieve sharp results during long exposures. You may want to time your exposures in between passing cars or pedestrians. On some surfaces, the distortion caused by your own shifting weight can cause the tripod to move during long exposures.
One thing to watch out for is image stabilization, either lens or camera based. Some image stabilization systems do have modes specifically for tripod use but often you’ll get better results if stabilization is switched off when mounted on a tripod.
If your camera can shoot “RAW” photos instead of JPG, use it. I won’t go into the arguments people have against shooting RAW; I think the best way to make the case is through an example. To the left is a shot at sunset taken under a bridge. It’s not a particularly great photo either technically or artistically, but I hope it illustrates the degree of control you have when shooting RAW.
This is as it would have come out of the camera if shot in JPG (I used the default settings in Lightroom to simulate this). There is extreme contrast, ranging from the setting sun, the back-lit buildings and barge, and the ground in shadow. The colours are muted compared with what I had experienced watching the sun go down. Setting the white balance is always tricky at sunset or sunrise as the sun itself is strongly filtered to give warm light, whereas the rest of the sky gives a blue cast.
I exposed the shot so that the sunset would not be overblown. I used a low ISO, medium aperture and a wide angle lens to capture the scene. I made sure that there was little camera shake in the captured shot. If I’d exposed for the shadows, the sunset and sky would have been blown out. It would have been very difficult, if not impossible, to recover that amount of overexposure, especially from a digital capture. However, the dynamic range of my camera is sufficiently high that I should be able to dig out the detail in the shadows from this shot.
After importing the RAW file into Lightroom, I cropped, straightened, and adjusted the image using the exposure, black and white points, highlight and shadow recovery sliders. I also played around with the whitebalance until I obtained a result that I was happy with. There’s no one “correct” whitebalance in this situation, I simply chose one which balances the blues in the sky and water with the reds and oranges of the setting sun. You can play around with warmer or cooler images to get the impression you desire. The end result of my adjustments is a “neutral” image, one that has detail at both ends of the brightness range and which does not look too unnatural. Even such a normal looking image would have been difficult to produce using film.
To give the image a bit more “punch”, you can adjust the contrast, clarity and vibrance. The highlights and shadows may need a bit of extra recovery; I prefer to retain some detail rather than crushing the blacks or totally blowing out the highlights. I don’t claim that the final result conforms to good taste but it is certainly more eyecatching (or eyesearing) than the original photo.
This range of post-capture adjustment is due to shooting RAW. I was able to leverage the information captured by the sensor in a single frame and in Lightroom process it to my needs. Even if you do not perform such extreme modifications to the image, the ability to extract shadows and highlights, together with whitebalance is invaluable. It is all about giving you options for the final image.
Contrast is the bane of the photographer. The human visual system is highly adaptive in its response to light levels and hence our experience is a highly compressed version of the “real” scene. Image sensors have difficulty in recording large extremes in light intensity which may be present in a typical scene. For instance, sunlit cloud may be tens of thousands of times brighter than deep shadow so exposing for one will leave the other either completely black or burnt out. To give the same visual impression, we need to compress the variation of light level in a scene to a more manageable level.
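To put such a ratio in photographic terms, contrast is usually measured in stops (doublings of light). A quick sketch using the illustrative figure above:

```python
import math

# A sunlit cloud some 30,000x brighter than deep shadow (illustrative figure)
ratio = 30_000
stops = math.log2(ratio)  # roughly 14.9 stops of scene contrast
print(f"{stops:.1f} stops")
```

A single exposure from a typical sensor records far fewer stops than this, which is why one end of the scene must be sacrificed without the techniques described next.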
There are two steps to this, a) the capture of the image data, b) the display of the captured data. The first step is called high dynamic range imaging (HDRI) capture. Various options exist for the second step, though what most people call HDR photography is one option called tone-mapping.
HDRI capture usually involves taking an exposure-bracketed sequence of images to capture both the shadows and highlights of a scene. Depending on the scene, this may require exposures from -5EV to +5EV as well as ones in between to cover the gaps. These brackets are then merged into a single HDR file containing a scene-referred set of pixel values. To cope with the large range of possible values, either high bit-depth or floating point numbers are used.
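Since each EV step doubles the exposure, the span of a bracket translates directly into an intensity ratio. A small sketch with the -5EV to +5EV example:

```python
# Each EV is a doubling, so a bracket from -5EV to +5EV spans 2^10 = 1024x
# in exposure, on top of the sensor's own single-frame dynamic range.
low_ev, high_ev = -5, 5
span = 2 ** (high_ev - low_ev)
print(span)  # 1024
```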
After creating an HDR file, it is usual to create an output-referred file for display. Tone-mapping is a popular option by which many gaudy, surrealistic photos are created (I include an example to the left). This is what many people associate with the term HDR.
It does not have to be this way, many tone-mapping methods produce naturalistic looking images which are not obviously distinguishable from a “normal” photo (apart from being able to represent scenes of extreme contrast). Exposure fusing (my preferred alternative method) bypasses the production of a scene-referred HDR file and tends to give more natural looking images.
From my experiences in HDRI, the most important aspect is the capture and conversion of the bracketed images ready for merger. My tips are:
Exploit the natural dynamic range of modern digital sensors to bracket at 3EV. Smaller bracket intervals (e.g. 1EV) may be useful in reducing noise but are not strictly necessary with careful post-processing.
White balance during RAW conversion is extremely important. Incorrect white balance will lead to colour casts in the merged HDR file which can be inconsistent, something especially detrimental to panorama creation.
During RAW conversion, try to linearise the output as much as possible, e.g. using minimum contrast and a “linear” curve. Most RAW converters (e.g. Lightroom) produce output files with gamma curves applied along with exposure tweaks. Choosing minimal contrast will increase the amount of image values falling in the middle of the exposure curve hence reducing the effect of the tone curve at the ends.
Try mid-tone emphasis (in Picturenaut) when merging files, especially when using minimum contrast conversion as above. This also reduces end effects from the RAW converter’s tone curve.
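The 3EV spacing in the first tip keeps bracket sequences short. A small sketch of how few frames a wide contrast range needs at that spacing:

```python
# A bracket from -6EV to +6EV at 3EV spacing: each frame overlaps its
# neighbours enough for clean merging, yet only 5 frames are needed.
step_ev = 3
evs = list(range(-6, 7, step_ev))
print(evs)  # [-6, -3, 0, 3, 6]
```

Halving the spacing to 1.5EV would roughly double the frame count for the same coverage, buying lower noise at the cost of shooting time.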
The great thing about street photography is that there are so many different ways of doing it. Some people stick with a rangefinder with a 35mm or 50mm lens. Some use mobile phones. Others prefer using a longer lens or telezoom on a professional DSLR. Whichever lens you decide to use affects the process and ultimately the look of the photos you take. A conscious choice of lens is a conscious choice of the style you shoot and the scenes you want to portray. But there are also many practical reasons for choosing one lens over another.
Primes versus Zooms
It could be that all you have is the kit zoom which came with your camera. Zooms are fine for starting out: they give you flexibility in framing, reduce the need for lens swapping, and may be all that you have. If you’ve shot a bit with your zoom, you can have a look at the focal lengths you’ve used most often. I found when I did this that the wide end was dominant. I preferred to get in close and take a more intimate photo of the subject, or else take in a larger scene incorporating greater context.
Lots of street shooters tend to work with single focal length lenses (so-called prime lenses). There are practical and psychological reasons for this. Practically, single focal length lenses can be made with larger apertures for low-light work and greater background blur. On conventional rangefinder cameras, primes are your only choice (excepting the Fuji X-Pro1, a digital Leica M, or a Leica M with Visoflex). Psychologically, it’s one less decision to take care of when making photos. A single focal length can act as an artistic constraint, forcing you to concentrate your creative energies on positioning, lighting, and framing, rather than zooming in and out of a scene.
Focal Length Choice
So if you do decide to shoot with a prime lens, it boils down essentially to whether you want to use a wide, medium or long lens. It’s tempting when starting out to use a long lens as this allows you to stand back and “pap” (act like a paparazzo) people from afar. It is tempting to keep away from your subjects if you are not confident, but the perspective often has the effect of being impersonal. Also, a large telephoto lens does make you stand out more, especially if attached to a big DSLR. A short tele, e.g. 85mm on film, makes a good portrait lens though, so it could do double duty.
A favourite lens is the 50mm (~35mm on APS-C, ~25mm on M4/3), the so-called normal lens. It gives an in-between angle of view, neither very telephoto nor wide angle. This can make it either versatile or boring depending on how you look at these things. It’s popular partly due to the fact that large aperture 50mm lenses are relatively cheap compared to other focal lengths. Cheap 50mm/1.4 manual focus lenses are easy to come by, and even AF versions of normal lenses aren’t terribly expensive. It’s probably a good place to start: if you find yourself stepping back to capture a scene, or wanting to crop in a lot to isolate an individual, then you can step up or down in angle of view by choosing a different focal length lens. But if you can live with tweaking your frame by stepping a little bit backwards or forwards, then the normal prime may be good for you.
Going wider (but not especially wide), the 35mm (~24mm on APS-C, 17mm on M4/3) is also a very popular focal length/field of view. It gives a moderately wide angle of view, but not obviously so. It allows approaching the subject a bit closer with the same approximate framing as a 50mm, or else capturing a usefully larger scene at the same distance.
Even wider lenses can be used for effect, 28mm (~19mm on APS-C, 14mm on M4/3) or even 24mm (~16mm on APS-C, 12mm on M4/3) allows really close approaches to the subject. This suits a fairly intimate style of candid photography best done in crowded scenes where even though you’re within a metre or so of the subject, they do not notice you taking photos.
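The equivalences quoted above follow from multiplying by the sensor crop factor. A sketch, using the usual approximations of 1.5 for APS-C and 2.0 for Micro Four Thirds:

```python
def full_frame_equivalent(focal_mm, crop_factor):
    """Field-of-view-equivalent focal length on a full-frame (35mm) sensor."""
    return focal_mm * crop_factor

print(full_frame_equivalent(16, 1.5))  # 24.0 -> 16mm on APS-C frames like 24mm
print(full_frame_equivalent(12, 2.0))  # 24.0 -> 12mm on M4/3 frames like 24mm
```

Note these are field-of-view equivalences only; depth of field and perspective behave differently across formats.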
Autofocus versus Manual Focus
A lot of the time, the choice is driven by necessity. Cheap manual focus lenses may be all that one can afford. AF versions of desirable focal lengths and apertures may not be available. You may be using a conventional rangefinder camera. One is forced to use manual focus lenses in these cases. Luckily, some AF cameras have focusing aids for manual focus. On SLRs, you may be able to get a focusing screen optimised for manual focus, usually with a split prism. On LiveView cameras, many offer a magnification function, blowing up a region of interest to allow fine focusing. Other cameras also offer focus peaking, a coloured outline of high-contrast areas that are usually indicative of in-focus regions. Using manual focus lenses for street photography is an acquired skill; it can take a bit of practice to use them effectively.
In some circumstances, AF may be more appropriate. It is usually faster, allows those snap shots, and reduces the time you have the camera up to your face and the possibility of alerting your subject. AF allows completely unseen operation, from the hip for instance. In bright light with manual focus, you can approximate the results by using a small aperture and “zone focusing”, but this becomes generally infeasible in low light.
Your Own Style
After trying different options, you may find that you are drawn to a particular “look” given by a particular lens used in a particular way. My advice is to explore the range of possibilities in order to find your own personal style. Just because many street photographers use a rangefinder with a 35mm or 50mm lens doesn’t mean that this combination necessarily suits your own photography. I can imagine someone being attracted to using telephoto lenses up close for ultra-intimate candids (though I don’t personally recommend it, I will not be held responsible for any arrests), or using ultra-wides or fisheyes for taking vistas. The important thing is to make it interesting, engaging, and compelling.
If you’re serious about photography and your friends and family consider you handy with a camera, at some point you’ll be asked, “Can you shoot our wedding?” Here’s my point of view, having been in this situation as a relative/friend with a camera way too often.
First point of advice: run away! You get the occasional cowboy, but your working professional wedding photographer actually earns their keep and I have the utmost respect for what they do day-in, day-out. It’s not just about having expensive cameras, even more expensive lenses, or fancy software; what you should be getting when hiring a pro is years of experience handling a high pressure situation and getting the shots when it matters. You don’t get many second chances at weddings and, considering how important the photos are to how the day will be remembered, the couple should think carefully before skimping on the photography budget, especially considering the total cost of the wedding.
If possible, convince them to hire a professional photographer, one with experience and with a good track record, especially in being able to produce photos in the style preferred by the bride and groom. Just as a marriage requires matching the right people, it is important to get a photographer who will be appropriate for the occasion. The primary photographer should be responsible for the “money shots”, the ones which are the bread and butter of any photo album. Have the couple draw up a list of shots which they would like, especially the group photos which can be endless in their permutational possibilities. Having a trusted friend of the couple (usually the Best Man) handle the logistics of the group shots is one way of reducing your workload to simply taking care of the photos instead of herd management. The last thing you want to have to worry about is where is Aunt Murtle or nephew Johnny? It’s easier if there is someone familiar with friends and family who can go chasing up on errant guests.
Try to enjoy yourself and let the primary shooter get the primary shots; you do what you can to fill in the gaps. If I can get out of being the primary photog for friends, I usually end up doing street-style candid shots to give a more informal, behind-the-scenes document of the day. When you don’t have the pressure of being first shooter, you can try for the more artistic or risky shots; most of them will fail, but you could end up capturing their favourite shot of the day. This may require getting in close to the other guests, or hanging back and observing. It depends on the situation and you have to be adaptable. I find that a wide-angle lens and being in the thick of the action is a good way of getting fun and intimate shots.
Even as second shooter, I usually have two bodies to reduce lens swapping: one with a 70-200mm telephoto zoom, the other with a 16-50mm standard zoom. I’ll also carry 35mm/1.8 and 50mm/1.4 large-aperture primes for low-light situations. It is important to anticipate what camera/lens is required at any given moment and to have it ready. For the official ceremony, you should have a good idea of the sequence of events and what shots are needed. Rehearse following the bride down the aisle and the route when leaving the ceremony. Scout the areas where you will do the formal portraits and group shots. During the more relaxed and informal moments, keep a good lookout for photo opportunities. Kids are invariably cute, especially all dressed up for the occasion.
Preparation will make things run a lot smoother. Planning the day, where you have to be and when, what equipment is needed for which shots, and even the little things like where to park and how to get all your stuff safely to where it needs to be. A checklist can help prevent you forgetting vital tasks.
It is tempting to use your camera as a machine gun, shooting anything that moves; afterwards you’ll regret having to wade through thousands of images. Certainly take those extra safety shots for the ones which matter (group shots, formals, portraits), but try to be selective about the rest, making sure that each frame counts. It’ll reduce the amount of backing up, editing, and processing you’ll need to do. There’s no harm in keeping shots varied, just don’t get too focussed on repeated shots of the same subject.
Post wedding, you will need to sort, edit, and produce a contact sheet for the couple to look at. Even at this stage, you should be ruthless in pruning extraneous shots. Couples always have a tendency to want even more shots, magnifying your workload. Particularly if you are doing a wedding as a favour or at reduced rates, you want to limit the amount of unnecessary effort you have to expend, so be strict about the number of agreed final photos to deliver. The final output will determine the degree of post-processing required. A set of 6″x4″ prints for an album will need only cursory white-balance, levels, and curves adjustments. Large portrait prints may require a visit to Photoshop to remove blemishes and sweat stains (not a task I relish, especially when touching up a hundred or so photos by hand), smooth skin, and generally make each shot look its best.
I just received my copy of the Samyang 8mm/2.8 fisheye in native NEX mount. I’ve tried it on the NEX-7 and have some very preliminary remarks. First the good news, it’s sharp, light, compact, pretty well made, and matches the form factor of the NEX-7 quite well. The only downside so far is the interaction of the lens with the sensor leading to magenta corners.
This colour cast is due to the way that off-axis light rays interact with the filter stack on top of the sensor and with the pixels themselves. The differently coloured sites in the Bayer array are affected unequally, leading to colour casts. This can be a problem with lens designs where the exit pupil sits close to the sensor (non-telecentricity).
To fix this, you can use CornerFix, a program which takes DNG files and rescales the data to compensate for the lens colour cast. You first have to create a reference file from which the compensation can be calculated, which is not exactly straightforward for a 180-degree diagonal fisheye. I have made such a correction profile and you can download the file (you may need to right-click and choose “save link as”, or similar). The lens was set at f/5.6, but the profile should work more or less at other apertures. The correction isn’t perfect, but for most photographic scenes it removes the majority of the colour cast, as well as the vignetting (which can be adjusted).
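To illustrate what a flat-field correction of this kind does, here is a minimal sketch in Python with NumPy. This is not CornerFix’s actual code, and the function names and the 16×16 centre-patch size are my own illustrative choices: a reference frame of an evenly lit, neutral surface is converted into per-channel gain maps, which are then multiplied into each image to be corrected.

```python
import numpy as np

def build_correction(reference):
    """Turn a flat-field reference frame into per-channel gain maps.

    `reference` is an (H, W, 3) float array from a shot of an evenly
    lit, neutral surface.  Each pixel's gain is the ratio of the centre
    level to the local level, computed per colour channel, so regions
    the cast has darkened (e.g. green in the magenta corners) get
    boosted back up.  The 16x16 centre patch is an arbitrary choice.
    """
    h, w, _ = reference.shape
    centre = reference[h // 2 - 8:h // 2 + 8, w // 2 - 8:w // 2 + 8]
    centre_mean = centre.reshape(-1, 3).mean(axis=0)
    # Guard against division by zero in dark or masked areas.
    return centre_mean / np.clip(reference, 1e-6, None)

def apply_correction(image, gains):
    """Multiply an image by the gain maps to cancel the colour cast."""
    return image * gains
```

Because the gains are computed per channel, the same pass also flattens ordinary luminance vignetting, which is why a tool like CornerFix can address both at once.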