L Gebhardt
Member
I'm looking to start using Delta 100 with Pyrocat, and I thought I'd do a simple film test using a step wedge and a densitometer. My end goal is printing on variable contrast papers. I use a home-built LED head with green and blue LEDs, which I have calibrated for different papers. I exposed a sheet of film through a step wedge under an enlarger with an incandescent bulb for 1/2 second and developed it for the recommended time for Pyrocat MC.
With normal developers there's very little difference between the blue and green readings, so I just use the visible channel to make a density plot for contrast measurements. But with the stain there's close to a stop of difference on the densest patch (1.31 green vs 1.57 blue, a difference of 0.26, or roughly 0.87 stops at 0.30 density per stop). I'm looking to see if my thinking about this is correct.
If I plot the green and blue curves I'll end up with two different slopes, with the blue being higher contrast than the green. My hypothesis is that when printing at a normal grade the effective contrast will be about halfway between the two, since the ratio of blue to green light is approximately equal. In other words, the green curve contributes lower contrast and the blue curve contributes higher contrast. It would then follow that as the grade is reduced the print will be even less contrasty than expected, since the low-contrast green curve will affect the print more, and the opposite will occur as the grade is increased.
Based on that I expect the print to respond more quickly to contrast changes, but to otherwise print the same. Does that match your experience?
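To make that concrete, here's a rough sketch of the arithmetic I have in mind (Python, with placeholder numbers rather than my actual readings, and assuming, as a simplification, that the paper's response can be approximated by a linear blend of the blue- and green-channel densities weighted by the blue:green share of the printing light):

```python
import numpy as np

# Placeholder step-wedge data -- swap in real densitometer readings.
# A 21-step wedge in 0.15 log E increments; the straight-line placeholder
# curves just end at the max densities I measured (1.31 green, 1.57 blue).
log_e   = np.arange(21) * 0.15
d_green = np.linspace(0.10, 1.31, 21)
d_blue  = np.linspace(0.10, 1.57, 21)

def gradient(log_e, density):
    """Slope of a straight-line fit to the curve (average gradient)."""
    slope, _ = np.polyfit(log_e, density, 1)
    return slope

def effective_gradient(blue_fraction):
    """Blend the channels by an assumed blue share of the printing light."""
    d_eff = blue_fraction * d_blue + (1 - blue_fraction) * d_green
    return gradient(log_e, d_eff)

# 0.50 stands in for a 'normal' grade; more blue = harder grade.
for f in (0.25, 0.50, 0.75):
    print(f"blue fraction {f:.2f}: effective gradient {effective_gradient(f):.2f}")
```

If the halfway assumption holds, the 0.50 case should land about midway between the pure green and pure blue gradients, and shifting the fraction either way should show the faster-than-normal contrast response I'm expecting.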
Ultimately I'm wondering if I can design a way to test stained negatives for contrast and speed at different development times (for example, BTZS-style testing), but it feels like there are too many variables to solve for.
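If I do go down that road, my current thinking is to do the usual curve bookkeeping twice, once per channel: for each development time, fit an average gradient and pick a speed point from the blue readings and again from the green readings, ending up with two families of curves instead of one. Something like this per curve (again only a sketch; the 0.1-over-fog speed criterion and the helper names are my own placeholders, not anything official from BTZS):

```python
import numpy as np

def average_gradient(log_e, density):
    """Slope of a straight-line fit through one channel's curve."""
    slope, _ = np.polyfit(log_e, density, 1)
    return float(slope)

def speed_point(log_e, density, base_plus_fog, criterion=0.10):
    """Log exposure where density first reaches base+fog plus the criterion.
    Assumes density increases with log exposure (clean the data first)."""
    return float(np.interp(base_plus_fog + criterion, density, log_e))

# For each development time, run both channels through the same two
# functions and tabulate gradient and speed separately for blue and green.
```

Whether the blue numbers, the green numbers, or some blend of the two is the right thing to hang a speed and development time on is exactly the part I'm unsure about.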