It's just words, words, words: Describing and interpreting film performance.

aparat (OP):

Stephen Benskin said:
I believe that's because it's already factored into his development model. It's the same tweak that Adams basically does. The development model in WBM is shown on page 128, fig. 2. It appears to use an LER/NDR of 1.20.

View attachment 332440

Interestingly, these numbers are very close to the ones I've obtained using my practical flare model. Flare reduces the effective luminance range, but if you don't use flare to reduce the luminance-range variable, then you need to increase the LER/DR variable to compensate. A constant, if you will. Working backwards from a normal gradient of 0.58, each stop of the luminance range would average 0.30 * 0.58 = slightly over 0.17 units of density per stop of luminance. To keep it simple, a seven-stop luminance range would result in a negative density range of 7 * 0.17 = 1.19, approximately what WBM uses. As we know, in reality average flare reduces the seven-stop luminance range to six stops, so the resulting negative density range should be more like 6 * 0.17 = 1.02.
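A quick numeric check of that arithmetic, as a Python sketch (the 0.30 log units per stop, the 0.58 gradient, and the one stop of average flare are all taken from the paragraph above; the paragraph rounds 0.30 * 0.58 = 0.174 down to 0.17 per stop):

```python
LOG_UNITS_PER_STOP = 0.30

def negative_density_range(gradient, stops):
    """Negative density range produced by a given average gradient
    over a luminance range expressed in stops."""
    return gradient * LOG_UNITS_PER_STOP * stops

normal_gradient = 0.58
print(round(negative_density_range(normal_gradient, 7), 2))  # 1.22 (1.19 with the rounded 0.17/stop)
print(round(negative_density_range(normal_gradient, 6), 2))  # 1.04 (1.02 with the rounded 0.17/stop)
```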

On page 62, fig. 6, WBM has grade 2 ranging from 0.95 to 1.15, with the average at 1.05. 1.20 falls outside the range of a grade 2 paper. For the math to work when determining a CI, you either need to add an extra 0.15 of density range to the grade 2 aim density range of 1.05 in the equation, or subtract flare from the luminance range. The end result is the same. Only one reflects reality.
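And a sketch of those two ways of making the math work, which land at essentially the same normal gradient (the one stop of average flare is carried over from the previous paragraph as an assumption):

```python
STOP = 0.30                  # log exposure units per stop

grade2_aim = 1.05            # grade 2 aim density range (average, WBM p. 62)
full_range = 7 * STOP        # seven-stop luminance range = 2.10
flare = 1 * STOP             # roughly one stop of average flare (assumed)

# Option 1: pad the aim density range by 0.15 and ignore flare (the WBM approach)
gradient_padded = (grade2_aim + 0.15) / full_range      # 1.20 / 2.10

# Option 2: keep the 1.05 aim and subtract flare from the luminance range
gradient_flared = grade2_aim / (full_range - flare)     # 1.05 / 1.80

print(round(gradient_padded, 2), round(gradient_flared, 2))  # 0.57 0.58
```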

Thank you for pointing it out! It's been a while since I read that chapter, and I am glad you brought it up because it made me go back and re-read it. Yes, the NDR of 1.2 is kind of their thing, as well as the speed point at 0.17 over B+F, if I remember correctly. It turns out I had put some of these calculations in my code, I just forgot I did it. It's not the first time. My memory is not what it used to be.

I have a question, if you don't mind. I have been reviewing my code for paper testing, and I came across a function I wrote that calculates ISO paper grades. I just don't remember where I got the formula from. My notes say "1992 ISO standard" so I am wondering if it's this one:

"ISO 6846:1992 Photography — Black-and-white continuous-tone papers — Determination of ISO speed and ISO range for printing"

Do you happen to know if this is the current standard? Thanks! I am grateful for your input. I always learn something new.
 
Stephen Benskin:
1678856062789.png
 
aparat (OP):
As usual, @Stephen Benskin and @Bill Burk pointed out some really interesting ideas related to describing film performance. This time, I went back to Way Beyond Monochrome (Lambrecht and Woodhouse, Fountain Press, 2003) and re-read the relevant chapters. I decided to take a deeper dive and see whether I could offer analysis and visualization tools for their model. I would imagine a lot of photographers have read the book and tried to incorporate the model into their own workflows. The book contains templates that can be printed out and used to analyze the data. That analysis is done mostly visually, with a few simple calculations. The tool described here, however, offers a computer-aided analysis: the idea is to enter the data into a spreadsheet, set a few preferences, and run the program.

I am asking for your feedback. This is just a first attempt at creating these visualization tools. My goal is to create a tool that is as simple as possible, both to use and to interpret, but without leaving important bits out. Bear in mind, this model, just like most other existing models, is not perfect, but I think it can be implemented successfully.

The gist of the WBM model is on p. 128. The idea is that there's a simple, linear relationship between "subject brightness range" and the Zone System (represented by the N-numbers), where seven stops is considered normal, and each stop (or EV) of brightness range in either direction corresponds to a single step along the N-number continuum.
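Since the relationship is linear, the lookup itself is trivial. A minimal sketch of how it could be coded (the function name and defaults are just for illustration):

```python
def n_number(sbr_stops, normal_sbr=7):
    """Zone System N-number for a metered subject brightness range, in stops.

    Seven stops is treated as normal (N); each stop less is one step toward
    expansion (N+1, N+2, ...), each stop more is one step toward contraction.
    """
    return normal_sbr - sbr_stops

print(n_number(7))   # 0  -> N
print(n_number(6))   # 1  -> N+1
print(n_number(9))   # -2 -> N-2
```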

tmax400sbrNPlot.png


The next important relationship is between the average gradient (Ḡ) and the Zone System (N). It's important to point out that the authors consider a density range of 1.20 to be best suited for a grade 2 paper in combination with a diffused light source. As @Stephen Benskin pointed out, this is not as simple as it appears, but for practical purposes it can be treated as a reasonable approximation. For a condenser enlarger, a different value may be needed, and the tool needs to be able to accommodate that.
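With a configurable aim density range, the model's aim gradient for each N-number follows directly. A sketch (1.20 is the book's diffusion value; a condenser value could be passed in instead):

```python
LOG_UNITS_PER_STOP = 0.30

def aim_gradient(n, aim_density_range=1.20, normal_sbr=7):
    """Aim average gradient for a given N-number in the WBM-style model:
    the aim density range spread over the (normal_sbr - n)-stop brightness range."""
    sbr_stops = normal_sbr - n
    return aim_density_range / (sbr_stops * LOG_UNITS_PER_STOP)

for n in range(-2, 3):
    print(f"N{n:+d}: {aim_gradient(n):.2f}")
# N-2: 0.44, N-1: 0.50, N+0: 0.57, N+1: 0.67, N+2: 0.80
```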

tmax400gNPlot.png

In my opinion, the analysis of film exposure and development can be summarized by four plots. I will be using my TMY-2 data as an example.

The first plot shows the relationship between the average gradient and development time. The curve represents the actual data, but the N-number labels are derived from the WBM model. Essentially, you can look up any combination of time and Ḡ and compare it to the Zone System (N). The blue line represents normal development, at an average gradient of 0.57. I decided to include only the range of N-2 to N+2 here, but I would leave it up to the user to decide on the range. In the book, this plot is on p. 138.
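For the computer-aided lookup, the measured (time, Ḡ) pairs can simply be interpolated. A sketch of the idea, with made-up numbers standing in for the actual TMY-2 test data:

```python
import numpy as np

# Placeholder development times (minutes) and measured average gradients.
times = np.array([6.0, 8.0, 10.0, 13.0, 16.0])
gbars = np.array([0.48, 0.55, 0.61, 0.68, 0.74])

def time_for_gradient(target_gbar):
    """Development time the measured curve predicts for a target average gradient."""
    return float(np.interp(target_gbar, gbars, times))

print(time_for_gradient(0.57))   # ~8.7 min with these example numbers
```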

tmax400gTimePlot.png


The next important view of the data in the WBM system is the relationship between development time and the Zone System (N). Here, we can compare the actual values (from our TMY-2 data) and their relationship to the WBM model. We can see that a "normal" development time in XTOL-R is a little under 9 minutes in a Jobo processor at 20°C. The same, of course, can be read from the plot above; this is just a different look at the same data.
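That plot is essentially the composition of the two previous relationships, so in code it is one lookup applied to the other. A self-contained sketch, again with placeholder data:

```python
import numpy as np

LOG_UNITS_PER_STOP = 0.30
times = np.array([6.0, 8.0, 10.0, 13.0, 16.0])   # placeholder data
gbars = np.array([0.48, 0.55, 0.61, 0.68, 0.74])

def time_for_n(n, aim_density_range=1.20, normal_sbr=7):
    """Development time for an N-number: aim gradient for N, then the time
    the measured curve predicts for that gradient."""
    gbar = aim_density_range / ((normal_sbr - n) * LOG_UNITS_PER_STOP)
    # np.interp clamps outside the measured range, so wide N values need wider test data
    return float(np.interp(gbar, gbars, times))

for n in range(-2, 3):
    print(f"N{n:+d}: {time_for_n(n):.1f} min")
```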

tmax400zoneTimePlot.png

Next, the plot of relative log exposure against the average gradient. In the book (p. 139), they recommend using it as a tool for estimating the effective film speeds of the curve family. I think that presenting such data in a table might be more effective; I am still not sure how to go about this. This is a somewhat unusual plot because we usually associate the average gradient with development time rather than exposure, so it can potentially be misleading. There is another wrinkle here: one needs to do an additional test to determine normal film speed, and then use this plot to estimate effective film speeds in relation to the film speed obtained in that test. I guess I am making it sound more complicated than it is. In essence, a 0.1 change in log exposure equals 1/3 stop in film speed. The EI values at the bottom of the plot are rounded so that they are easy to use with an exposure meter. One of the important conclusions is that the "normal" film speed, or EI, is around 320.
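The conversion from a log exposure shift at the speed point to an effective EI is the only arithmetic involved. A sketch using the 0.1 log exposure = 1/3 stop relationship, with TMY-2's nominal ISO 400 as the starting point:

```python
def effective_ei(nominal_ei, extra_log_exposure):
    """Effective exposure index when the speed point needs extra_log_exposure
    more exposure: 0.1 log units is 1/3 stop, which lowers the EI by a factor
    of 2**(1/3)."""
    stops = extra_log_exposure / 0.3
    return nominal_ei / (2 ** stops)

print(round(effective_ei(400, 0.1)))   # ~317, i.e. a meter setting of about 320
```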

tmax400gLogExpPlot.png

Finally, there's the plot of effective film speeds against the Zone System (N). This plot shows how "sensitivity decreases with development contraction." It is supposed to inform the photographer's exposure and development decisions in the field. Unlike the previous plots, this one does not include the actual TMY-2 data; instead, it shows the relationship between the N-numbers and film speed derived from the actual data. Again, I am not sure how clear this distinction is. Please let me know what you think.

tmax400zonezoneEFSPlotPlot.png

I hope to put together an installment about testing paper, and then relating film and paper data to each other. More to come.
 
Stephen Benskin:
@aparat, that's how the book does it, but it does not necessarily represent reality. What I don't understand with WBM is how they properly explain basic theory but have a disconnect in logic when applying it. Using 0.17 as a speed point conflates film speed and exposure placement, yet the book discusses the importance of the fractional gradient method. On page 137, it uses the equation 1.20 / 2.10 = 0.57. The book discusses flare and arrives at a normal gradient that is almost identical to Kodak's, but it only does so by changing the value of the LER and not including flare. Right answer, but...

On page 62, the book has a breakdown of the LER for the grades, with grade 2 being 0.95 - 1.15 and 1.05 the average. On page 48, the book has a diagram similar to the one I posted earlier. They use the 1.05, but use a different placement on the negative to justify the use of the 1.20 and 1.05, and the book uses 1.20 throughout. The diagram forces the aim ranges to run from Zone I 1/2 to Zone VIII 1/2 for the 1.20 density range, and from Zone III to Zone VIII. How is this supposed to work based on tone reproduction theory? And the diagram actually has a tier showing the effects of flare, yet later doesn't factor it in.

This is a similar diagram from Photographic Materials and Processes:

1679268849690.png


This graph is from my paper What is Normal. It shows how three of the models can all have normal fall at the same CI but, depending on the variables, can yield very different results as you move further away. I believe it's a good example of why most approaches work for the majority of cases: most situations fall close to the normal conditions. The no-flare curve is based on an aim LER of 1.05. If you change the aim to 1.20, it will shift up and intersect at the normal point. You might notice how the curve shape of the no-flare curve is similar to the Practical Flare curve. It's just a coincidence.
1679267350006.png

One of the rationales for the Practical Flare model, which falls between the results from the fixed flare and variable flare models, comes from Jones: "for the soft papers, the density scales of the negative (DR) should in most cases exceed the sensitometric exposure scale of the paper (LER), whereas, for the hard papers, the density scales of the negatives should in most cases be less than the sensitometric exposure scale of the paper (LER)."

And regarding a fixed-density method to determine film speed, from C.N. Nelson's Safety Factors in Camera Exposure:

"The fixed-density criterion tends to underrate films that are developed to a lower average gradient and to overrate films that are developed to a higher average gradient."
 
aparat (OP):
@Stephen Benskin Thank you for this demo! I took the liberty of implementing your Practical Flare model in my own code. Of course, I will always cite your fantastic work whenever I make a reference to it. By the way, I modeled the Practical Flare model by interpolating the Fixed and Variable flare models. I hope that is correct?
CIByFlareModel.png


Speaking of WBM, when I first read the book years ago, I remember being confused about their definition and application of "N-numbers." It wasn't a bad thing, as it prompted me to look for other sources on the subject, including Photrio, to figure out how to interpret their definition(s). Still, it seems that they sometimes treat N-numbers as descriptive of one's own data, and sometimes as prescriptive, i.e., as the target for one's exposure and development. For example, in Fig. 10, p. 138, they seem to plot their own experimental data, for which they probably computed the N-numbers based on the formula in Fig. 9, p. 138. However, later, in Fig. 12, p. 139, they plot the Zone System definition of N-numbers (derived from the LSLR of the scene) against EFS derived from their experimental data. Now, I do understand what they mean, but I distinctly remember being confused about it.

One other detail they don't seem to adequately explain is how they fit the curves in a lot of their plots (e.g., Fig. 10, p. 138). They just plot the curves that seem to fit the data, but no explanations are given as to how they do it. Perhaps it is a minor issue.
 
Stephen Benskin:
When all the variables are represented, it's easy to apply it to almost any situation. One interesting observation: the results from the variable flare model are close to Kodak's values for "pushing for speed."

This uses 0.40 for flare at the statistical average luminance range, and the variation in flare with the luminance range would hit 0 at around 1.20, so I've locked it at 0.10 at that point. With shorter luminance ranges, where the minimum exposure falls is probably the greatest influence on flare.
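One possible reading of that, as a sketch (the linear form and the 2.2 average log luminance range are assumptions, not necessarily how the model above computes it):

```python
AVG_LOG_RANGE = 2.2   # assumed statistical average scene log luminance range

def variable_flare(log_luminance_range, flare_at_average=0.40,
                   zero_point=1.20, floor=0.10):
    """Flare (log exposure units) that varies with the scene luminance range:
    linear between the point where it would reach 0 (~1.20) and the average
    range (where it is 0.40), clamped to a 0.10 floor as described above."""
    slope = flare_at_average / (AVG_LOG_RANGE - zero_point)
    return max(floor, slope * (log_luminance_range - zero_point))

print(variable_flare(2.2))   # 0.40 at the average range
print(variable_flare(1.2))   # clamped at the 0.10 floor
```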
Practical Flare Model b.jpg
 
aparat (OP):

Very interesting! I also locked the variable flare calculation at 0.1, for the same reason.

By the way, I am not sure what you mean by Kodak's values for "pushing for speed." Do you mean that big Kodak table (with data for condenser and diffusion enlargers), or something else? Thanks!
 
Stephen Benskin:

In the various Kodak development charts they have aim CIs depending on the EI rating. TMX at 100, TMX at 200, etc. Developing for the EI setting as opposed to luminance range.

1679662801919.png
1679662843628.png


The difference in CI for Kodak and non-Kodak films emphasizes how Kodak is no longer a Fortune 500 company. This is only speculation, but someone new probably decided to go back to their classic CI 0.56 for normal, but they didn't have the money to retest non-Kodak films, so they used the data from the time when they considered CI 0.58 normal. It's not a good look.
 

miha:
@aparat, thanks for your work, I appreciate it a lot. I would suggest that you collect all your tests of film/dev combos, repost them in a single post, and have it made sticky (by the mods, of course), as there is so much valuable info spread across several threads. Thanks again for your time.
 
aparat (OP):

Thanks! Yeah, I think this would be a good idea. I am going to go through my data and make the plots again so they're all consistent, and then I will try to put it all together. It may take me a while, but I'll get it done.
 

MattKing (Moderator):
Be sure to break your posts into discrete chunks!
 

redbandit:

MattKing said:
The only part of the "perfect" negative approach that isn't well suited to roll film use is the part that involves tailoring development to a single negative.
But even with that in mind, expansion and contraction development tools are still useful any time you expose an entire roll under similar lighting conditions.
This image is from a roll for which I used increased development, due to relatively flat, high-overcast lighting that was consistent throughout the day.
View attachment 329371

On the subject of the thread, what I would like to see is a combination of graphs, juxtaposed with example photographs and descriptive words, in order to be able to associate the three descriptive tools.
By the way, the way you overlaid the Zone indicators with the curves in post #44 was really useful!

What was the method used for working with the shadow when taking the actual meter settings? I've tried shots like this and ended up with negatives that looked good on my light table, but when printed it was nearly impossible to tell the branches apart.
 

MattKing (Moderator):

I usually take an incident meter reading from the position of the subject to set the foreground exposure correctly. I'll often evaluate that setting by exploring what reflected readings would tell me about some particular lighting conditions, but in this case the high overcast lighting made it a situation where no further adjustment was needed, save and except some minimal fine tuning - small amounts of dodging or burning at the time of printing.
 