Higher contrast = higher resolution?

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
Hey there, I'm trying to understand whether or not higher contrast in an image equals higher resolution, in terms of the amount of actual data that is optically resolved, not the quality of the image.
I read: https://en.wikipedia.org/wiki/Minimum_resolvable_contrast
And this part of this forum: https://www.photo.net/discuss/threads/low-contrast-high-resolution-im-confused.194936/
An example I have: if you have a bunch of tiny QR codes all crammed together in a blob, would a digital camera lens resolve them better if there were more contrast in the overall image, or does it make no difference?
Obviously if the image were entirely grey nothing would be resolved, so there has to be contrast between black and white in order to retrieve data.
There are quite a few different answers in that forum, so I wouldn't mind hearing some fresh ones.
 
Last edited:

MattKing

Moderator
Joined
Apr 24, 2005
Messages
51,930
Location
Delta, BC Canada
Format
Medium Format
Contrast and resolution are different.
Resolution and "data" are different as well.
Contrast affects the ability to measure resolution. It also affects the ability to retrieve data.
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
How does it affect the ability to retrieve data? I think that's what I'm most curious about.
 

MattKing

Moderator
Joined
Apr 24, 2005
Messages
51,930
Location
Delta, BC Canada
Format
Medium Format
The efficiency of the retrieval system will be greater with higher contrast.
If that retrieval system is something like our eyes, it will be more likely to observe that two adjacent details really are two, and not one.
If those details form part of an "edge", systems like our visual system will be much more likely to identify that edge - we are best at observing edges.
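A minimal sketch of that idea, assuming a toy fixed-threshold reader and additive Gaussian sensor noise (all the numbers here are illustrative, not from any real system): with the same noise level, the high-contrast target is misread far less often.

import random

def error_rate(dark, light, noise_sigma, threshold=0.5, trials=100_000):
    """Fraction of modules misread by a fixed-threshold reader under Gaussian noise."""
    errors = 0
    for _ in range(trials):
        true_dark = random.random() < 0.5           # half the modules are dark
        signal = dark if true_dark else light
        reading = signal + random.gauss(0.0, noise_sigma)
        if (reading < threshold) != true_dark:      # reader calls it dark below threshold
            errors += 1
    return errors / trials

# Low-contrast target (0.4 vs 0.6) against a high-contrast target (0.1 vs 0.9)
print("low contrast :", error_rate(0.4, 0.6, noise_sigma=0.1))   # roughly 16% misread
print("high contrast:", error_rate(0.1, 0.9, noise_sigma=0.1))   # essentially zero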
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
That's what I suspected. Now when you say efficiency, does that include the lens, or is it mostly about the ability to resolve/differentiate edges within the system (say a camera + QR code) itself?
But more or less you're saying that the more contrast in an image, the more data you'll be able to resolve?
 

Kino

Subscriber
Joined
Jan 20, 2006
Messages
7,599
Location
Orange, Virginia
Format
Multi Format
There has been an abundance of articles written over the years that seem to conclude that "apparent" sharpness is enhanced by increased contrast and some grain or noise. That does not mean there is more resolution in higher-contrast images, but that they appear sharper overall.
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
@Kino Lots of factors, I guess. So if image #1 has the same sharpness as image #2, but image #1 has more contrast, will it technically be better resolved? Do you have any articles on hand that talk about this?
 

Kino

Subscriber
Joined
Jan 20, 2006
Messages
7,599
Location
Orange, Virginia
Format
Multi Format
No, it will not be more resolved; it will appear to be sharper. The studies I have read were tests done on film, but the perceptual principles are the same.

I think Barry Thornton, in his book "Edge of Darkness", devotes quite a bit of space to his studies comparing a very high-resolving film vs. a lower-resolution film, and to his astonishment the lower-resolution film appeared sharper. The upshot, as I remember it, was that the eye needs something to "grab onto": the images produced by the slower, fine-grained film appeared to be of lower resolution, but were technically of much higher resolution.

Contrast in itself should not affect technical resolution, but it does affect how our brain processes the image and gives us the impression of more resolution.

That's my understanding...
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
@Kino How confusing! I am looking at it from the perspective of the resolving abilities of a camera, or a camera/computer system, though, not our own vision. Even though in the end it comes down to our own vision.
I'm specifically interested in what defines a good "data transfer" of literal "computer bits" between image and camera. Is it contrast, sharpness, lp/mm... all of the above?
The only thing I understand currently is that if you have a grey image, you get nothing back except a 1.
If you keep the same image but just change its contrast or sharpness, do you lose literal bits of information the more contrast there is, or is it the same information, just interpreted differently by the system that is reading it, i.e. a camera lens?
 
Last edited:

Kino

Subscriber
Joined
Jan 20, 2006
Messages
7,599
Location
Orange, Virginia
Format
Multi Format
OK, I'll give my take on it: I am sure others will have different opinions...

Technical resolving power is not based upon aesthetic properties; i.e., making the image so flat or so contrasty that you cannot perceive the intermediate image tones. The bits and bytes are still there and the system is resolving the data just fine; it's just that it is now useless in our perceptual domain.

Taken with the same camera with identical settings, a solid gray card image has as much resolution as an image of an intricate clock mechanism.

Literally, you do not affect the "data transfer" of the image from the camera to the computer (assumed here) unless you alter the bits and bytes by using any form of compression or re-interpretation of that data.

You have to separate technical resolution from apparent resolution; one is silicon (computer) the other human (wetware).
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
@Kino That's a great answer... so it depends strictly on the system (i.e., the reader) then? The ability to resolve edges/capture a "unit" of grain, a unit being a bit?
 

MattKing

Moderator
Joined
Apr 24, 2005
Messages
51,930
Location
Delta, BC Canada
Format
Medium Format
Are you using the system to record data (microfilm and optical data storage) or record images that are intended to be viewed?
If the former, it all turns on the system used to extract that data.
If the latter, then the "system" is the human visual system, and all the discussions about the subjective factor known as "sharpness" come into play.
Our visual systems are much more sensitive to (edge) contrast than they are to resolution.
But a system designed to record and retrieve something like, for example, the billing records for a large power utility, will work differently.
 

Kino

Subscriber
Joined
Jan 20, 2006
Messages
7,599
Location
Orange, Virginia
Format
Multi Format
A bit is an arithmetic abstraction of an image value. What that value represents depends upon who views it and their cultural and environmental baggage, but as humans we do appear to share an underlying, common visual language to a certain extent...

+1 for Matt's observations above...
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
@MattKing It would be for optical data storage. So it is reliant on the system then. Is the IBM 1360 Photostore chip a good example? Things have changed rapidly since then, though.
@Kino I think in barcode terms they even have something called a "mil", which is their unit of measurement for barcode reading systems.
I love how this has drifted almost into philosophy.
So an "efficient" overall system obviously has both a reader and something to be read, but for retrieving maximum "computer" data, are there any particular tips for a modern imaging system?
 

Kino

Subscriber
Joined
Jan 20, 2006
Messages
7,599
Location
Orange, Virginia
Format
Multi Format
Without knowing the particulars, it is difficult. In the motion picture field, we rely on GPU-accelerated systems communicating directly over an x16 PCIe bus, most typically NVIDIA CUDA architecture, for high-resolution image processing. You'll need a robust RAID for capture and a good SSD cache drive to avoid overruns of the imaging buffers. Real-time analysis of extreme data rates would probably require discrete capture and processing systems joined by a massive data bus...

A scanner I used for motion picture work had an extensive calibration scheme that used "pixel leveling" algorithms to calibrate each individual pixel against a flat-field light source, in an attempt to minimize the noise introduced into the imaging pipeline; the theory being that if you push the thresholds of the imaging sensor well above and below the noise floor and "zero" them out to pure "white" and "black", you avoid thermal artifacts. This narrowed the resolution of the imaging target somewhat, but offset that loss by largely eliminating the noise without having to resort to liquid cooling blocks on the sensor.

I don't know what else to suggest without more details...
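The core of a "pixel leveling" / flat-field correction of that kind can be sketched in a few lines of Python; the frames and numbers below are hypothetical, just to show the per-pixel gain idea, not how any particular scanner implements it.

import numpy as np

def flat_field_correct(raw, flat, dark):
    """Scale each pixel by the gain measured from an evenly lit flat-field frame."""
    flat_signal = flat.astype(np.float64) - dark
    raw_signal = raw.astype(np.float64) - dark
    gain = flat_signal.mean() / np.clip(flat_signal, 1e-6, None)   # per-pixel gain map
    return np.clip(raw_signal * gain, 0.0, None)

# Toy 4x4 sensor with uneven per-pixel sensitivity and a fixed dark level
rng = np.random.default_rng(0)
dark = np.full((4, 4), 2.0)                     # dark-frame level
sensitivity = rng.uniform(0.8, 1.2, (4, 4))     # per-pixel gain variation
flat = dark + 100.0 * sensitivity               # flat-field exposure
raw = dark + 50.0 * sensitivity                 # uniform scene of value 50
print(flat_field_correct(raw, flat, dark))      # roughly 50 everywhere after leveling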
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
@Kino I'm particularly interested in finding out the maximum theoretical amount of data that can be resolved by a mobile phone camera. I don't have a particular system in place just yet, but I was hoping to figure out if there's a way to make an average guesstimate for mobile phones in general using maths. Typically a mobile phone camera falls in the 8-12+ megapixel range. I know you can resolve a small amount of information with QR codes, but why is this? What is the biggest roadblock: the limitations of the lens, or the software?
Scanners are probably an easier topic... I've tried them and had much better luck. Does software or code such as the pixel-leveling example exist for any computer camera?
 

trendland

Member
Joined
Mar 16, 2012
Messages
3,400
Format
Medium Format
jsmoove said:
(quoting the opening post in full)
NO - but that is the typical mistake! Higher contrast has nothing to do with resolution!
There is of course an exception: if you have almost no contrast at all (London in heavy fog), then restoring normal contrast will bring more resolution (once the fog is gone you can see the clock of Big Ben).
High contrast isn't the same as "sharpness" - the next mistake! Edge effects cannot raise the resolution - the next mistake! But very high contrast and edge effects make a picture look much sharper! That is a "pseudo" sharpness!
By the way: who plays this game? Manufacturers of digital equipment. New models which can't deliver more resolution from the smallest sensors get extreme edge sharpening and the highest contrast, so the 48MP model does indeed look better in resolution than the 39MP model you bought a year ago = a big illusion!
Where do you actually get more resolution? 1) from a larger format, 2) from a better lens, 3) from low-speed films!

with regards
 

nmp

Member
Joined
Jan 20, 2005
Messages
1,995
Location
Maryland USA
Format
35mm
jsmoove said:
@Kino I'm particularly interested in finding out the maximum theoretical amount of data that can be resolved by a mobile phone camera.

Theoretically, the highest resolution that can be obtained on a sensor would be (pixels per mm)/2 in lp/mm. However, that is not achieved in practice because of the noise introduced at various points in the process. Additionally, the lens in conjunction with the aperture is limited to its own resolving power, beyond which the sensor cannot help.

In other words, it is complicated. There is no magic number for your question that is applicable to all mobile phone cameras.
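A quick sketch of that arithmetic; the 12 MP sensor dimensions below are hypothetical, just to show the order of magnitude for a typical phone-sized chip.

def nyquist_lp_per_mm(pixels_across, sensor_width_mm):
    """Theoretical ceiling: half the pixel-sampling frequency, in line pairs per mm."""
    pixels_per_mm = pixels_across / sensor_width_mm
    return pixels_per_mm / 2.0

# Example: a 12 MP sensor, 4000 x 3000 pixels, on a chip roughly 5.6 mm wide
print(nyquist_lp_per_mm(4000, 5.6))   # ~357 lp/mm at the sensor, before lens and noise losses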
 

trendland

Member
Joined
Mar 16, 2012
Messages
3,400
Format
Medium Format
nmp said:
(quoting the reply above in full)
Generally, above ca. 10-25 MP (which has been our situation for many years now), the deciding criterion is the size of the sensor!
And since our OP is asking: smartphones have sensors of the smallest size!
It seems that the processing power is now so high that the noise of the smallest sensors can be massively reduced via noise-reduction algorithms!

Otherwise it would be physically impossible to get a 30MP picture from such a small sensor type without massively reduced resolution from noise!
But with the same electronic power, these "tuned" pictures get better the more space the sensor can use!

with regards
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
jsmoove said:
(quoting the question above about the maximum data a mobile phone camera can resolve)

This is actually very simple to do. In order for a digital sensor to capture a transition from dark to light (or vice versa) you need at least 2 pixels: one to capture the dark area and one to capture the light area. This is commonly referred to as a line pair. If your digital sensor is monochrome (Bayer array sensors work on the same principle, but are a little more complex because they're full color), you simply take the horizontal resolution of the sensor and divide it by two, do the same thing vertically, and you now have the absolute theoretical maximum number of line pairs that sensor can capture.

In digital sensor land, a sensor's contrast response is typically 100% all the way up to its theoretical maximum resolution, so contrast doesn't really come into play until you put the lens into the equation. But even then, on modern digital sensors you'll very quickly discover that you have more problems resolving fine detail onto the sensor because of things like diffraction, focus, and other optical aberrations. A 24MP APS-C sized digital sensor has 6000 pixels spread out over 24mm for a total of 250 pixels per mm, or 125 line pairs per mm of sensor. You need a pretty good lens to actually resolve 125 line pairs per mm onto the sensor.

You mentioned QR Codes. If you stop and think about it, a QR Code is really just a bunch of transitions from dark to light, or line pairs, so assuming a theoretically perfect system, it’s pretty straightforward to apply the two paragraphs above to what you’re trying to solve for.

Now for the reality: in the real world you won't get anywhere near that; in fact you'll be lucky if you manage half that, as your total system resolution is only going to be as good as the lowest-resolution part of your setup, which is the lens. Digital camera sensors have out-resolved the vast majority of lenses for quite some time, and it's only going to get worse as camera manufacturers keep adding more and more sensor resolution.
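Here is that arithmetic as a short sketch, using Adrian's 24 MP APS-C numbers and, for the QR part, the idealised assumption of one pixel per module (real decoders want several pixels per module, so treat the last figure as a ceiling, not a target).

def sensor_line_pairs(h_pixels, v_pixels, width_mm):
    """Theoretical maximum line pairs across the frame and per mm of sensor."""
    lp_h = h_pixels / 2
    lp_v = v_pixels / 2
    lp_per_mm = (h_pixels / width_mm) / 2
    return lp_h, lp_v, lp_per_mm

print(sensor_line_pairs(6000, 4000, 24.0))   # (3000.0, 2000.0, 125.0) -> 125 lp/mm

# Ideal-case QR sizing: at one pixel per module, a 6000-pixel-wide frame spans at most
# 6000 modules; a version 40 QR code is 177 modules across.
print(6000 // 177)                           # ~33 such codes side by side, in a perfect system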
 

RalphLambrecht

Subscriber
Joined
Sep 19, 2003
Messages
14,560
Location
K,Germany
Format
Medium Format
jsmoove said:
(quoting the opening post in full)
Yes: since higher contrast makes it easier to differentiate between white and black lines, it usually increases the measured resolution. That is why resolution, measured in lp/mm, is typically referenced to the contrast of the target. RIT makes resolution targets of different contrasts, and film resolution is often measured with high-contrast targets.
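The usual way to put a number on that target contrast is modulation, M = (Lmax - Lmin)/(Lmax + Lmin); film datasheets commonly quote resolving power at test-object contrasts of 1000:1 and 1.6:1, which are very different tests, as this tiny sketch shows.

def modulation(l_max, l_min):
    """Michelson modulation (contrast) of a bar target."""
    return (l_max - l_min) / (l_max + l_min)

print(modulation(1000, 1))   # ~0.998 for a 1000:1 high-contrast target
print(modulation(1.6, 1))    # ~0.23  for a 1.6:1 low-contrast target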
 

Theo Sulphate

Member
Joined
Jul 3, 2014
Messages
6,492
Location
Gig Harbor
Format
Multi Format
Adrian Bacon said: "... Digital camera sensors have out-resolved the vast majority of lenses for quite some time ..."

My curiosity about that leads to these questions:

1. Is it possible to etch on glass or imprint on paper 250 line pairs (or more) per millimeter? I think so, because I worked with Compugraphic phototypesetting equipment which claimed 5000 dpi (fonts and symbols were stored electronically on disk and film was scrolled past a light-emitting surface).

2. If (1) is true, what optical limitations prevent those 250 line pairs from being faithfully projected from the source to the sensor? That is my real question.
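On question 1, a quick unit conversion (addressable dots only, which says nothing about what the optics actually resolved onto the film): at two dots per line pair, 5000 dpi works out to roughly 98 lp/mm, and 250 lp/mm would need about 12,700 addressable dpi.

MM_PER_INCH = 25.4

def dpi_to_lp_per_mm(dpi):
    """Addressable dots per inch to line pairs per mm, at two dots per line pair."""
    return (dpi / MM_PER_INCH) / 2

print(dpi_to_lp_per_mm(5000))    # ~98 lp/mm for the 5000 dpi phototypesetter figure
print(250 * 2 * MM_PER_INCH)     # 12700 dpi needed to address 250 lp/mm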
 
OP

jsmoove

Member
Joined
Feb 8, 2017
Messages
409
Location
Ottawa
Format
35mm
Thanks everyone! @Theo Sulphate
I second your questions, Theo! That's actually something I'm thinking about too in relation to my original question.
I was originally looking at lithophanes; is that in the same ballpark as your thinking?
I'm curious to know if you can etch/emboss at a high dpi/lp/mm.
Did you manage this with the Compugraphic phototypesetting equipment? Can you elaborate?
 
Last edited: