Frequently Asked Questions


General color blindness questions




Is there a cure for colorblindness?

No. Color blindness is almost always caused by an inherited condition that alters the photoreceptors (cone cells) in the eye. There is no way to restore normal function to these cells.

What causes colorblindness?

See Webexhibits for an excellent description of the various causes and types of color blindness. You might also like to read this article by Alex Wade.

I need to pass a color blindness test for work. What can I do?

Some jobs require employees to pass a color blindness test (often using the Ishihara plates). Such tests are required by, among others, the FAA, the Coast Guard and most military and emergency services. They generally prohibit the use of colored contact lenses or other devices that are claimed to alleviate the effects of color blindness. Unfortunately, if you really are color blind, there is very little you can do to pass these tests.

Will my child inherit color blindness?

Color blindness is usually caused by a problem in a gene carried on the X chromosome. Men do not pass their X chromosome on to their sons, so a color blind man cannot pass his color blindness on to his son. Women, having two X chromosomes, can carry the color blindness gene without ever knowing it. If there are color blind men on the mother's side of the family, there is a chance that her child will inherit the gene, though it will usually only cause color blindness if the child is a boy.

Very rarely, a mother will be color blind herself, meaning that she carries two 'color blind' X chromosomes. If she has a boy, he is almost certain to be color blind. The fact that the color blindness gene is on the X chromosome is why men are about ten to twenty times more likely to be color blind than women.
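The arithmetic behind that ratio can be sketched in a few lines of Python. The 8% gene frequency below is an assumed round figure for illustration, and the simple squaring model ignores the fact that several different gene variants are involved, so this is only a back-of-the-envelope picture:

```python
# Rough illustration of why X-linked color blindness is much more
# common in men than in women. The gene frequency is an assumed
# figure for illustration only.
gene_freq = 0.08          # assumed frequency of a 'color blind' X chromosome

p_male = gene_freq        # a man has one X, so one copy is enough
p_female = gene_freq ** 2 # a woman needs the gene on both X chromosomes

print(f"males:   {p_male:.1%}")    # 8.0%
print(f"females: {p_female:.2%}")  # 0.64%
print(f"ratio:   {p_male / p_female:.1f}x")
```

The resulting ratio of roughly 12x falls squarely within the "ten to twenty times" range quoted above.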

According to a color test I'm red-green color blind, but I can tell the difference between red and green- how can this be?

It is certainly possible to have a red-green color deficit and still be able to distinguish many shades of 'red' from many shades of 'green'. In fact, color tests carefully select specific shades of red and green that are indistinguishable to people with a color deficit. Also, there are various degrees of color blindness: someone with a mild deficit will be able to distinguish more reds and greens than someone with a more severe deficit.

How many people are colorblind?

About 8% of all males have some sort of color deficit, but for females it is only about 0.5% (see the webexhibits causes of color site). Assuming that the current world population is 6,552,504,794 (see the US Census Bureau population clock) and that half of those are male, this gives: 6,552,504,794 * 0.5 * 0.08 + 6,552,504,794 * 0.5 * 0.005 = 278,481,454.

Of course, we don't really know the numbers that exactly, so we should round off our estimate. A good estimate is about 280,000,000 color-deficient people in the world today (as of October 24, 2006).
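The estimate can be reproduced directly from the figures quoted above:

```python
# Back-of-the-envelope count of color-deficient people worldwide,
# using the figures quoted above (US Census Bureau clock, Oct 2006).
population = 6_552_504_794
male_rate = 0.08     # ~8% of males
female_rate = 0.005  # ~0.5% of females

estimate = population * 0.5 * male_rate + population * 0.5 * female_rate
print(f"{estimate:,.0f}")  # 278,481,454 -> round to roughly 280 million
```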

If you want to break it down by type of color deficit and/or severity, you can use the estimates from the webexhibits causes of color site to compute your own values.

Will a color deficit prevent me from becoming a pilot?

Good color vision matters for recognizing the various lights and signals a pilot depends on, especially at night. In the United States, the FAA requires all pilots to have 'the ability to perceive those colors necessary for the safe performance of airman duties.' You can find much more information about the issue on the Pilot Medical Solutions, Inc website.


Vischeck Classic and General Vischeck questions

Nothing happened when I hit 'Submit'.

How big is the file you are uploading? How fast is your internet connection? Uploading large files takes time: for example, a 1MB file will take about 3 to 4 minutes to upload over a normal 56k modem. Use compressed JPEG files to reduce upload time.

The images don't show up properly: The next page appears but my images aren't on it, just some "broken image" icons.

Vischeck will accept 'most common image formats'. This includes JPEGs, GIFs, BMPs, PNGs and most TIFFs. Unfortunately, Sod's law says that the format you just chose is unsupported. We know there are currently problems with CMYK TIFFs and with some Adobe Illustrator, PDF and PostScript files. Vischeck uses the free ImageMagick image processing library, and it doesn't handle everything. On the other hand, it's free. We suggest that you try another format if you can. If not, send your small images to us and we'll see if we can sort them out for you.

The images popped up but the simulated version is a funny color or too blurred or too small.

The default setting for the color simulation model is "deuteranope - red/green color blind". If you just want to simulate the effect of distance, select the 'Normal' color vision type. Note that Vischeck shows you how the image you just submitted would look at a distance- that is, the 'actual' image that you submitted as it would appear on a normal computer screen. If it's a tiny image, it will become very blurred very quickly, because that's what happens to small things at a distance.

Also, to save our processing power and your download time, large images are resized before processing. The maximum size is 410x410 pixels. If you just submitted a big 1280x1024 pixel screenshot, the simulation will operate on a smaller version, so the result will be blurrier than you expect. Get around this by breaking critical parts of the image into smaller chunks and running a separate Vischeck on each part. Or contact us and we can run a full-sized image for you.

I'm not getting the same results from the online version of Vischeck and the VischeckJ plugin. Some of the simulated colors are noticeably different. Are the algorithms in the two versions of Vischeck different?

You aren't doing anything wrong- the older version of VischeckJ and the online Vischeck did indeed give slightly different results. These minor differences do not reflect an error in one implementation or the other, but rather different assumptions about the display device used to present the original image. We have since adjusted the display parameter assumptions in VischeckJ to match those of the online simulation, so they should now give very similar results.

The Vischeck color deficit simulation algorithm makes various assumptions about the display used to view the original image. In other words, the simulation is (by necessity) a simulation of how the original image would look when viewed on a particular display by a person with a color deficit. The ideal simulation would be customized to your particular display, which would need to be characterized via photometric measurements. We do this type of careful calibration for our lab studies of vision, but it wouldn't make for a very user-friendly service to require everyone to calibrate their display before using Vischeck!

Keep in mind that these simulations are only a rough approximation of what a person with a color deficit would see, so the precise color values shouldn't be taken too seriously. There is also quite a bit of individual variance in color perception, so an exact interpretation of the simulation results is not useful when you want to generalize to a whole population (like "all deuteranopes"). The Vischeck simulation should be viewed as a rough guide to how the world appears to someone with a color deficit.



Why doesn't VischeckURL work on my site?

To simulate an entire web page, VischeckURL must parse the HTML code of that page. Parsing HTML and all the 'extras' that have been tacked onto it (CSS, JavaScript, Flash, ActiveX, etc.) is no trivial task. We have concentrated on making VischeckURL robust when parsing plain HTML, and have added some functionality for working with CSS and JavaScript. When we have more time and resources we will add extra features (see below).

When will VischeckURL work for my site?

Unfortunately, we have day-jobs that take precedence over Vischeck, so we can't set a firm update schedule. However, if you sign up for our mailing list, we'll send you a note when the code is updated.



Daltonize questions

How did Daltonize get its name?

Daltonize was named after John Dalton, the person who first wrote about colorblindness in 1794. See the Wikipedia entry on John Dalton for more information.

How does it work?

Color images like this one:

[Image: a full-color photograph of cells, as seen by a trichromat]

can be split into three color 'dimensions'. One way to do this is to split them into the red, green and blue planes used to control video displays:

[Image: the same photograph split into red, green and blue planes]

Another way is to split them into a bright/dark dimension, a red/green dimension and a blue/yellow dimension.

[Image: the same photograph split into light/dark, red/green and blue/yellow planes]

This is something like what the human visual system does. Most color blind people (dichromats) cannot see the red/green dimension of an image. But they still see the blue/yellow and light/dark dimensions.

So the Daltonize algorithm analyzes the image to see if there is significant information (variation) in the red/green dimension. If there is, it tries to convert this into variations in the light/dark and blue/yellow dimensions. It does this intelligently, so that any significant variation already present in those dimensions is not lost. It also stretches the red/green dimension so that people with partial color blindness have a better chance of seeing red/green contrast.
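The idea can be sketched in a few lines of NumPy. The opponent-color matrix below is a toy stand-in (the real Daltonize algorithm works in a properly calibrated color space), and the function name and parameter defaults are illustrative, not the actual implementation:

```python
import numpy as np

# Toy opponent-color transform: rows give the light/dark, red/green
# and blue/yellow axes. Illustrative only -- the real Daltonize
# algorithm uses a calibrated color space.
RGB_TO_OPP = np.array([
    [1/3,  1/3,  1/3],   # light/dark  (mean of R, G, B)
    [1/2, -1/2,  0.0],   # red/green   (R minus G)
    [1/4,  1/4, -1/2],   # blue/yellow (R+G minus B)
])
OPP_TO_RGB = np.linalg.inv(RGB_TO_OPP)

def daltonize_sketch(rgb, rg_stretch=1.2, lum_scale=0.5, by_scale=0.2):
    """Stretch the red/green axis and project part of it onto the
    light/dark and blue/yellow axes (cf. the three parameter boxes)."""
    opp = rgb @ RGB_TO_OPP.T                  # pixels -> opponent space
    lum, rg, by = opp[..., 0], opp[..., 1], opp[..., 2]
    new = np.stack([
        lum + lum_scale * rg,                 # some R/G contrast -> lightness
        rg * rg_stretch,                      # exaggerate remaining R/G contrast
        by + by_scale * rg,                   # some R/G contrast -> blue/yellow
    ], axis=-1)
    return (new @ OPP_TO_RGB.T).clip(0.0, 1.0)

# A pure red and a pure green pixel, identical along the red/green
# axis to a deuteranope, come out differing in lightness as well:
pixels = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
out = daltonize_sketch(pixels)
print(out.round(2))
```

Setting `lum_scale` or `by_scale` to zero reproduces the behavior of zeroing the corresponding parameter box described below.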

Won't it just introduce other color confusions?

Daltonize tries to maximize the information available to dichromats. By shifting information from the red/green variations into the light/dark and blue/yellow dimensions, there is a chance that it will reduce contrast in those dimensions. However, by analyzing the variations in all three color dimensions beforehand, these problems are kept to a minimum. In practice, we rarely see a case where Daltonize introduces color confusions where none existed before.

The colors look funny / unnatural

Our aim is to increase the visibility of things that would normally be invisible to color blind people. To do this, we vary the colors along the blue/yellow and light/dark dimensions. This can make some things look odd (for example, it might make red apples look a little bluish). You can make things look a little better by forcing Daltonize to use only the light/dark dimension to compensate for red/green variations. But ultimately, the trade-off is between having things look a little strange and not being able to see them at all.

It doesn't seem to do anything.

If your image contains very little variation in the red/green dimension, Daltonize has no work to do. In other words, the image will be just as visible to a color blind person as to someone with normal color vision. Congratulations! If you want, you can force Daltonize to work harder by increasing the numbers you enter in the parameters box.

Sometimes, the parameters you enter in the boxes (the numbers underneath the filename) will prevent Daltonize from doing much to the image. For example, if there is a lot of variation in both the red/green and luminance dimensions and you force Daltonize to map red/green variations to luminance only, it will not be able to do much: it will try to preserve the information in the luminance direction, and this will reduce the amount of variation it can introduce from the red/green dimension. Allowing it to use the blue/yellow dimension as well might result in a significant improvement.

What do the numbers in the boxes mean exactly?

The top number is the red-green stretch factor. It says how much the Daltonize algorithm will stretch the red-green axis of the input image. A large RG stretch will make reds redder and greens greener. Why is this useful? Many people who are color blind actually have some limited red/green vision. By stretching the RG axis in an image, these people will see red/green variations that would not normally be visible to them. Setting this value to '1' means that the length of the red/green axis will be multiplied by '1' - in other words, it will not change. A reasonable value for this setting is about 1.2 or 1.3.

The other two numbers are the luminance and blue/yellow projection scales. These tell the Daltonize algorithm how much it is allowed to project red/green variations into the luminance and blue/yellow dimensions. For example, setting the luminance projection scale to zero means that the algorithm will not translate any red/green information into variations in lightness and darkness.

In particular, you might want to experiment with the blue/yellow projection scale. If images look 'unnatural' after passing through the Daltonize algorithm, it is usually because variations in the blue/yellow dimension are being introduced: perhaps red things look blue or green things look yellow. Setting this value to '0' (no projection onto the blue/yellow axis) will make Daltonized images look more natural at the expense of reducing the efficiency of the Daltonize algorithm somewhat.


RG Stretch | Lum Scale | B/Y Scale | RESULT
1          | 0         | 0         | No change in the image
1.3        | 0         | 0         | Only a red/green stretch applied
1          | 1.3       | 0         | Red/green information transformed to luminance variation (reds look darker)
1.2        | -1.3      | 0.2       | Red/green stretch, plus red/green information transferred to blue/yellow and luminance variation (reds look brighter and bluer)

Why not distinguish reds and greens with different texture patterns / flashing colors / borders?

We have tried to make the color transformation that Daltonize performs as subtle as possible. This has the advantage that the images it generates still look reasonable to someone who is not color blind. Our tests with other methods of increasing red/green contrast led us to conclude that the algorithm we present here is a good one. But if you come up with something even better we'd be interested to hear from you!

Why is it so slow?

Because a lot of people are using it, it's written in Java, and the image processing engines are mid-range home PCs operating over domestic DSL connections. If you want to donate some equipment or bandwidth to the Vischeck team, we'll be happy to take it off your hands. :)

Nothing comes up when I submit an image

Probably the connection is timing out due to bandwidth and processing restrictions (see above). There is also a chance that one of the image engines is down. In either case, you might like to wait a few minutes and re-submit the image. Avoid using large images or images in unusual formats: stick to JPEG, BMP, PNG, GIF and TIFF. If they transfer at all, large images will be re-scaled when they are served back to you.

Can I use the images this generates in a book / newspaper article / web page / magazine / Hollywood movie?

In general, yes, with two restrictions: if you're going to make money from the images, you'll have to ask us first; and if you use them for illustrative purposes, tell us, credit us and mention the website.

Can I get the Daltonize engine as a stand-alone package (like the Vischeck engine)?

Not at the moment. But it's all written in Java (see above), so in principle we could release a stand-alone version quite easily. We'll do this if there's enough demand, so let us know.

My company would like to use this in a commercial product.

The Daltonize algorithm was originally developed at Stanford University. Commercial rights are now held by the inventors. Contact us for information on licensing the software.


Privacy policy. Last modified 2010-Dec-15 20:17 GMT.