
Development history

This page shows the development history of the projects linked to Ansel over the past 4 months, sorted by date, including commits, issues and pull requests. Related items are shown, based on content similarity computed by a language model. The page is updated roughly once every 10 days.

Add tca data for Olympus ZD ED 50mm f/2 Macro

Upload 8496cb

Add vignetting data for Olympus Zuiko Digital ED 50-200mm f/2.8-3.5 SWD + EC-20 2x extender

Upload a4f993

Update BUG-REPORT.yml

Add upload and backtrace

Update BUG-REPORT.yml

Add version and OS

Merge pull request #681 from LebedevRI/uncoalescing

BitVacuumers: delegate byte splitting to the consumer

Update config.yml

Add Chantal to links

Update issue templates

Disable feature requests

CoalescingOutputIterator: silence clang-tidy's performance-move-constructor-init

CoalescingOutputIterator::maybeOutputCoalescedParts(): do advance the iterator

While it doesn't matter for std::*inserter iterators, it is generally a pointer, and that does need to be advanced.
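The distinction can be sketched with a toy output routine (illustrative only, not rawspeed's actual code): with `std::back_insert_iterator` the increment is a no-op, but when the output iterator is a plain pointer, skipping the increment would overwrite a single element.

```cpp
#include <cassert>
#include <cstdint>
#include <iterator>
#include <vector>

// Toy illustration: the writer must advance its iterator after every write.
// For std::back_insert_iterator, operator++ is a no-op (assignment already
// appends), so a missing increment goes unnoticed; for a raw pointer it is
// essential.
template <typename It>
It emitBytes(It out) {
  for (uint8_t b : {1, 2, 3}) {
    *out = b;
    ++out; // required for pointers, harmless for insert iterators
  }
  return out;
}
```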

Codecov Report

Attention: Patch coverage is 74.07407%, with 14 lines in your changes missing coverage. Please review.

Project coverage is 60.96%. Comparing base (8a9967c) to head (a8274e1). Report is 3 commits behind head on develop.

Files Patch % Lines
src/librawspeed/adt/PartitioningOutputIterator.h 36.36% 7 Missing :warning:
src/librawspeed/bitstreams/BitVacuumerJPEG.h 0.00% 5 Missing :warning:
...zz/librawspeed/bitstreams/BitVacuumerRoundtrip.cpp 0.00% 1 Missing :warning:
...peed/codes/PrefixCodeEncoder/PrefixCodeEncoder.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #681 +/- ##
===========================================
+ Coverage 60.91% 60.96% +0.05% 
===========================================
 Files 271 273 +2 
 Lines 16366 16406 +40 
 Branches 2077 2077 
===========================================
+ Hits 9969 10002 +33 
- Misses 6268 6276 +8 
+ Partials 129 128 -1 
Flag Coverage Δ
benchmarks 11.88% <24.07%> (+0.03%) :arrow_up:
integration 44.95% <0.00%> (-0.11%) :arrow_down:
linux 57.14% <66.66%> (+0.04%) :arrow_up:
macOS 25.27% <65.38%> (+0.11%) :arrow_up:
rpu_u 44.95% <0.00%> (-0.11%) :arrow_down:
unittests 21.63% <62.96%> (+0.10%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #680 from kmilos/kmilos/unsupported

Add placeholders for known unsupported cameras

Add Pergear 60mm f/2.8 MK2 Macro

Upload c6bddc

  • Canon models using CR3 codec
  • Panasonic models using v8 codec

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (8a9967c) 60.91% compared to head (58031e3) 60.91%.

@@ Coverage Diff @@
## develop #680 +/- ##
========================================
 Coverage 60.91% 60.91% 
========================================
 Files 271 271 
 Lines 16366 16366 
 Branches 2077 2077 
========================================
 Hits 9969 9969 
 Misses 6268 6268 
 Partials 129 129 
Flag Coverage Δ
benchmarks 11.84% <ø> (ø)
integration 45.05% <ø> (ø)
linux 57.10% <ø> (ø)
macOS 25.16% <ø> (ø)
rpu_u 45.05% <ø> (ø)
unittests 21.52% <ø> (ø)
windows ∅ <ø> (∅)


(I have not verified that each change here does exactly what it says it does.)

Hm, I don't know. I guess this makes sense, and seeds the names for the RPU sample request thing.

@kmilos thank you.

I guess this makes sense, and seeds the names for the RPU sample request thing.

One of the ideas behind it indeed. Another I had in mind is to still have getCamera() return at least something (e.g. sanitized maker/model names and aliases) for as many known cameras as possible.

Hi there,

I want to help on #367 and would like some pointers on how to test it. I have built it and downloaded the test images from raw.pixls.us; what is the recommended process for testing?

For example, how can I determine whether the decompressor is successfully implemented? Is there any viewer or other way to inspect the result?

Thanks!

I'm not sure what the specific question is. You know that it is successfully implemented when, after compiling darktable with updated rawspeed submodule, the images load correctly in darktable :)

See also https://github.com/darktable-org/rawspeed/blob/develop/docs/IntegrationTesting.rst. And:

rawspeed/build-Clang17-releaseWithAsserts$ ./src/utilities/rstest/rstest 
usage: ./src/utilities/rstest/rstest
 [-h] print this help
 [-c] for each file: decode, compute hash and store it.
 If hash exists, it does not recompute it, unless option -f is set!
 [-f] if -c is set, then it will forcefully recompute the existing hashes.
 If -c is not set, and the hash does not exist, then just decode,
 but do not write the hash!
 [-d] store decoded image as PPM
 the file[s] to work on.

With no options given, each raw with an accompanying hash will be decoded
 and compared (unless option -f is set!) to the existing hash. A summary of
 all errors/failed hash comparisons will be reported at the end.

Suggested workflow for easy regression testing:
 1. remove all .hash files and build 'trusted' version of this program
 2. run with option '-c' -> creates .hash for all supported files
 3. build new version to test for regressions
 4. run with no option -> checks files with existing .hash
 If the second run shows no errors, you have no regressions,
 otherwise, the diff between hashes is appended to rstest.log

Thanks for the swift reply!

I'm not sure what the specific question is. You know that it is successfully implemented when, after compiling darktable with updated rawspeed submodule, the images load correctly in darktable :)

I would like to know more about the workflow: do you work directly in the darktable rawspeed submodule and rebuild darktable after modifying rawspeed?

Since it will open up a relatively heavy GUI, is there any way to prevent this, or is there a lightweight viewer that also depends on rawspeed?

Also, any suggestions on attaching GDB for debugging/tracing purposes? Does it still need to attach to the rebuilt darktable?

Thanks!

I'm not sure what the specific question is. You know that it is successfully implemented when, after compiling darktable with updated rawspeed submodule, the images load correctly in darktable :)

I would like to know more about the workflow: do you work directly in the darktable rawspeed submodule and rebuild darktable after modifying rawspeed?

Generally almost never, but everyone's mileage may differ.

Again, it depends on what you want to achieve. You could just build rawspeed as a standalone project and, as noted in the blurb I posted, use $ ./src/utilities/rstest/rstest -d to produce a PPM/PFM of the decoded image.

Again, it depends on what you want to achieve. You could just build rawspeed as a standalone project and, as noted in the blurb I posted, use $ ./src/utilities/rstest/rstest -d to produce a PPM/PFM of the decoded image.

Thanks for pointing me out! I tried rstest and stored the PPM, put the PPM back into darktable and adjust the exposure to get the image back! (btw, are there any viewer that is not darktable support adjusting exposure on the fly?)

So if the new decompressor works well, the above workflow should work, right?

Thanks for pointing me out! I tried rstest and stored the PPM, put the PPM back into darktable and adjust the exposure to get the image back!

Usually it's not quite An Image, since demosaicing is usually needed, which doesn't happen for PPMs, but it's enough to get a rough view, I guess.

(btw, are there any viewer that is not darktable support adjusting exposure on the fly?)

I can't parse the question, sorry.

So if the new decompressor works well, the above workflow should work, right?

I guess so?

@mlouielu thank you for taking a look at that!

I suppose you can open the PGM in GIMP and hit Colors -> Auto -> Stretch Contrast... (or any other tool having similar functionality, like Adjustments -> Auto-Level in paint.NET on Windows etc.)

It'll still be CFA-mosaiced and grayscale, but it should at least give you an idea whether there is a "sensible" image behind it or just garbage.

Or, you could even temporarily hack the rstest code to write out the PGM MSB-aligned; then there's no need for external programs for exposure compensation.
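The MSB-alignment idea can be sketched as follows (a hypothetical helper for illustration, not the actual rstest code; the function name and parameters are made up): raw samples usually occupy only the low bits of a 16-bit word, which displays as near-black, and shifting them up acts as a crude exposure boost.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: shift a sample with `significantBits` valid low bits
// (e.g. 12) up to the most significant bits of a 16-bit word, so the PGM is
// viewable without external exposure-compensation tools.
inline uint16_t msbAlign(uint16_t sample, int significantBits) {
  return static_cast<uint16_t>(sample << (16 - significantBits));
}
```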

You can also avoid any code changes and use pamdepth 65535 to shift the PGM up to 16b, then use pnmhisteq/pnmnorm to enhance, and then pamtopng/pnmtopng (or any other output format to view in any sw): https://netpbm.sourceforge.net/doc/directory.html

I'm sure you can do the equivalent w/ ImageMagick/GraphicsMagick as well on one line...


Issues: darktable-org/rawspeed


72 Open · 118 Closed


Issues list

  • #679 How to help for developing new decompressor? (opened Feb 22, 2024 by mlouielu, 9 comments)
  • #520 Probable overscanning of sony ILCE-7M4 APS-C cropped RAWs (opened Sep 18, 2023 by koolfy, 13 comments)
  • #518 On the use of dithering when decompressing lossless NEF files (opened Sep 13, 2023 by mallman, 1 comment)
  • #516 Support JPEG XL compression in DNG (opened Sep 1, 2023 by kmilos, 4 comments)
  • #506 DNG opcode level 2 support (labels: incomplete, lacks-RPU-samples; opened Jul 31, 2023 by jcampbell05, 8 comments)
  • #491 src/librawspeed/README.md is out of date (opened Jun 30, 2023 by mallman, 4 comments)
  • #489 Website died (opened Jun 24, 2023 by aurelienpierre)
  • #411 Fujifilm X-T5 Raw compressed doesn't work in github version (opened Dec 14, 2022 by ghost, 5 comments)
  • #392 Olympus white level (opened Oct 28, 2022 by paolodepetrillo, 5 comments)
  • #390 We don't sample black areas unless asked to scale black/white. Is that what we really want? (opened Oct 21, 2022 by LebedevRI)
  • #389 RawImageDataU16::calculateBlackAreas() only looks at a single pixel o_O (opened Oct 21, 2022 by LebedevRI)
  • #377 Different white/black point, Canon EOS 30D (opened Jul 25, 2022 by PeterWem)
  • #367 Panasonic DC-GH6 raws are version 8 - new decompressor needed (opened Jun 1, 2022 by LebedevRI, 6 comments)
  • #366 Fujifilm GFX100 (non-lossless) "compressed" raw support (opened May 30, 2022 by LebedevRI)
  • #359 Nikon D7000 raw white level too high (opened May 11, 2022 by kofa73, 1 comment)
  • #343 failure to process certain DNG files (Leica M246) (opened Feb 9, 2022 by ubergeek801, 7 comments)
  • #338 [2022-12] Revert temporary patches that resurrected support for cameras lacking sample coverage (opened Jan 9, 2022 by LebedevRI)
  • #332 Fujifilm X-E4 (non-lossless) "compressed" raw support (opened Dec 19, 2021 by vfonov)
  • #325 Investigate Halide dependency (opened Dec 7, 2021 by LebedevRI)
  • #320 pointer to guidelines or requirements for tests (opened Nov 9, 2021 by homberghp)
  • #311 Kodak DCS 520c, issues with ISO (opened Oct 9, 2021 by PeterWem, 2 comments)
  • #285 White level for Canon EOS Rebel T3 (opened Aug 9, 2021 by photopea, 3 comments)
  • #284 need canon eos m6 mark ii support (opened Jul 25, 2021 by jheidemann, 1 comment)
  • #283 Exif data from Silverfast DNG causes rawspeed to fail (opened Jul 9, 2021 by 0x10, 1 comment)
  • #267 DNG GainMap opcode support (opened Apr 21, 2021 by paolodepetrillo, 4 comments)


https://github.com/darktable-org/darktable/issues/16297

Using latest ADC 16.2

Crops are not verified, waiting for RPU samples.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 60.91%. Comparing base (8a9967c) to head (785a599). Report is 10 commits behind head on develop.

@@ Coverage Diff @@
## develop #677 +/- ##
========================================
 Coverage 60.91% 60.91% 
========================================
 Files 271 271 
 Lines 16366 16366 
 Branches 2077 2077 
========================================
 Hits 9969 9969 
 Misses 6268 6268 
 Partials 129 129 
Flag Coverage Δ
benchmarks 11.84% <ø> (ø)
integration 45.05% <ø> (ø)
linux 57.10% <ø> (ø)
macOS 25.16% <ø> (ø)
rpu_u 45.05% <ø> (ø)
unittests 21.52% <ø> (ø)
windows ∅ <ø> (∅)


Checked, still no sample uploads.

Merge pull request #676 from LebedevRI/recoalescing

CoalescingOutputIterator: non-byte parts

Merge pull request #675 from LebedevRI/bitvacuumer-perf

Make some use of CoalescingOutputIterator

BitVacuumerJPEG: add fast-path for the happy path

Comparing bench/librawspeed/bitstreams/BitVacuumerJPEGBenchmark-old to bench/librawspeed/bitstreams/BitVacuumerJPEGBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
BM/Stuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/Stuffed/2097152_mean +0.0125 +0.0126 6205 6282 6204 6282
BM/Stuffed/2097152_median +0.0125 +0.0127 6204 6282 6203 6282
BM/Stuffed/2097152_stddev +3.9793 +3.9137 2 10 2 10
BM/Stuffed/2097152_cv +3.9180 +3.8527 0 0 0 0
BM>/Stuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM>/Stuffed/2097152_mean -0.0349 -0.0348 6453 6228 6452 6227
BM>/Stuffed/2097152_median -0.0338 -0.0338 6447 6229 6446 6228
BM>/Stuffed/2097152_stddev -0.8191 -0.8192 41 7 41 7
BM>/Stuffed/2097152_cv -0.8125 -0.8126 0 0 0 0
BM>/Stuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM>/Stuffed/2097152_mean -0.1021 -0.1020 5819 5225 5819 5225
BM>/Stuffed/2097152_median -0.1015 -0.1015 5816 5225 5815 5225
BM>/Stuffed/2097152_stddev -0.6999 -0.6951 10 3 10 3
BM>/Stuffed/2097152_cv -0.6658 -0.6605 0 0 0 0
BM>/Stuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM>/Stuffed/2097152_mean -0.0482 -0.0482 5681 5408 5681 5407
BM>/Stuffed/2097152_median -0.0482 -0.0482 5682 5408 5682 5408
BM>/Stuffed/2097152_stddev +0.3751 +0.3802 3 4 3 4
BM>/Stuffed/2097152_cv +0.4448 +0.4500 0 0 0 0
BM/Unstuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/Unstuffed/2097152_mean -0.0402 -0.0401 6508 6247 6508 6246
BM/Unstuffed/2097152_median -0.0401 -0.0400 6508 6247 6508 6247
BM/Unstuffed/2097152_stddev +15.3546 +21.8853 1 9 0 9
BM/Unstuffed/2097152_cv +16.0395 +22.8421 0 0 0 0
BM>/Unstuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM>/Unstuffed/2097152_mean -0.0328 -0.0328 6414 6203 6413 6203
BM>/Unstuffed/2097152_median -0.0317 -0.0318 6409 6206 6409 6205
BM>/Unstuffed/2097152_stddev -0.7829 -0.7804 38 8 38 8
BM>/Unstuffed/2097152_cv -0.7756 -0.7729 0 0 0 0
BM>/Unstuffed/2097152_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM>/Unstuffed/2097152_mean -0.0982 -0.0983 5939 5356 5939 5355
BM>/Unstuffed/2097152_median -0.0984 -0.0985 5939 5355 5939 5354
BM>/Unstuffed/2097152_stddev +0.0997 +0.1317 3 4 3 4
BM>/Unstuffed/2097152_cv +0.2194 +0.2550 0 0 0 0
BM>/Unstuffed/2097152_pvalue 0.0062 0.0062 U Test, Repetitions: 9 vs 9
BM>/Unstuffed/2097152_mean +0.0041 +0.0041 5545 5568 5545 5568
BM>/Unstuffed/2097152_median +0.0044 +0.0045 5544 5569 5543 5568
BM>/Unstuffed/2097152_stddev +5.5353 +5.1423 3 17 3 17
BM>/Unstuffed/2097152_cv +5.5087 +5.1171 0 0 0 0
BM/Unstuffed/2097152_pvalue 0.0062 0.0062 U Test, Repetitions: 9 vs 9
BM/Unstuffed/2097152_mean +0.0048 +0.0048 6243 6273 6242 6272
BM/Unstuffed/2097152_median +0.0061 +0.0061 6235 6273 6234 6272
BM/Unstuffed/2097152_stddev -0.5808 -0.5713 18 7 18 8
BM/Unstuffed/2097152_cv -0.5827 -0.5733 0 0 0 0
BM>/Unstuffed/2097152_pvalue 0.0423 0.0423 U Test, Repetitions: 9 vs 9
BM>/Unstuffed/2097152_mean -0.0028 -0.0028 6017 6000 6016 5999
BM>/Unstuffed/2097152_median -0.0034 -0.0033 6022 6001 6021 6001
BM>/Unstuffed/2097152_stddev -0.7949 -0.7906 22 4 22 5
BM>/Unstuffed/2097152_cv -0.7944 -0.7900 0 0 0 0
BM>/Unstuffed/2097152_pvalue 0.0036 0.0036 U Test, Repetitions: 9 vs 9
BM>/Unstuffed/2097152_mean +0.0005 +0.0006 5109 5112 5108 5111
BM>/Unstuffed/2097152_median +0.0006 +0.0006 5108 5111 5108 5111
BM>/Unstuffed/2097152_stddev -0.3388 -0.2929 2 1 1 1
BM>/Unstuffed/2097152_cv -0.3391 -0.2933 0 0 0 0
BM>/Unstuffed/2097152_pvalue 0.0047 0.0047 U Test, Repetitions: 9 vs 9
BM>/Unstuffed/2097152_mean -0.0023 -0.0022 5055 5043 5054 5043
BM>/Unstuffed/2097152_median -0.0024 -0.0023 5056 5044 5056 5044
BM>/Unstuffed/2097152_stddev +0.4571 +0.4766 6 9 6 9
BM>/Unstuffed/2097152_cv +0.4604 +0.4798 0 0 0 0
OVERALL_GEOMEAN -0.0290 -0.0290 0 0 0 0

CoalescingOutputIterator: disallow type conversion in ctor, use universal reference

Add 7Artisans 35mm f/0.95

Upload f23ad5

Add Olympus Zuiko Digital ED 50-200mm f/2.8-3.5 SWD + EC-14 1.4x extender

upload 2474a3

Add vignetting data for Olympus Zuiko Digital ED 50-200mm f/2.8-3.5 SWD

update 17ee75

Codecov Report

Attention: 8 lines in your changes are missing coverage. Please review.

Comparison is base (6f1a5bf) 60.94% compared to head (ae1a159) 60.86%.

Files Patch % Lines
src/librawspeed/bitstreams/BitVacuumerJPEG.h 0.00% 6 Missing :warning:
src/librawspeed/adt/CoalescingOutputIterator.h 71.42% 2 Missing :warning:
@@ Coverage Diff @@
## develop #675 +/- ##
===========================================
- Coverage 60.94% 60.86% -0.08% 
===========================================
 Files 271 271 
 Lines 16327 16342 +15 
 Branches 2075 2077 +2 
===========================================
- Hits 9950 9947 -3 
- Misses 6249 6268 +19 
+ Partials 128 127 -1 
Flag Coverage Δ
benchmarks 11.82% <75.00%> (-0.02%) :arrow_down:
integration 45.11% <0.00%> (-0.06%) :arrow_down:
linux 57.07% <67.74%> (-0.09%) :arrow_down:
macOS 25.07% <79.31%> (+0.01%) :arrow_up:
rpu_u 45.11% <0.00%> (-0.06%) :arrow_down:
unittests 21.46% <11.76%> (-0.08%) :arrow_down:
windows ∅ <ø> (∅)


Merge pull request #674 from LebedevRI/CoalescingOutputIterator

Implement CoalescingOutputIterator

Implement CoalescingOutputIterator

$ bench/librawspeed/adt/CoalescingOutputIteratorBenchmark
2024-02-20T02:27:50+03:00
Running bench/librawspeed/adt/CoalescingOutputIteratorBenchmark
Run on (32 X 3397.05 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 1.16, 1.08, 1.12
------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
------------------------------------------------------------------------------------------
BM_Broadcast/16777216 9893 us 9893 us 70 Latency=589.649ps Throughput=1.57945Gi/s
BM_Broadcast/16777216 4948 us 4948 us 141 Latency=294.908ps Throughput=3.15801Gi/s
BM_Broadcast/16777216 3711 us 3710 us 189 Latency=221.16ps Throughput=4.21109Gi/s
BM_Broadcast/16777216 3708 us 3708 us 189 Latency=220.986ps Throughput=4.21439Gi/s
BM_Copy/16777216 8115 us 8114 us 85 Latency=483.656ps Throughput=1.92559Gi/s
BM_Copy/16777216 3533 us 3533 us 198 Latency=210.609ps Throughput=4.42204Gi/s
BM_Copy/16777216 1730 us 1730 us 402 Latency=103.111ps Throughput=9.03223Gi/s
BM_Copy/16777216 1301 us 1301 us 536 Latency=77.538ps Throughput=12.0112Gi/s

Will be rather useful for BitVacuumer perf work.

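The idea behind such an iterator can be sketched in a few lines (illustrative only; rawspeed's actual CoalescingOutputIterator is a real output iterator with a different interface): small byte-sized writes are buffered and flushed to the sink as full 32-bit words, so the sink sees a quarter as many, but wider, stores.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch: buffer individual bytes and emit them as whole
// 32-bit words once four bytes have accumulated.
class ByteCoalescer {
  std::vector<uint32_t>& out;
  uint32_t cache = 0;
  int occupancy = 0; // bytes currently buffered

public:
  explicit ByteCoalescer(std::vector<uint32_t>& sink) : out(sink) {}

  void push(uint8_t b) {
    cache |= uint32_t{b} << (8 * occupancy); // little-endian packing
    if (++occupancy == 4) {
      out.push_back(cache);
      cache = 0;
      occupancy = 0;
    }
  }
};
```

A trailing partial word would still need an explicit flush at end-of-stream; that bookkeeping is omitted here for brevity.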

Codecov Report

Attention: 10 lines in your changes are missing coverage. Please review.

Comparison is base (953b136) 60.68% compared to head (c862016) 60.93%. Report is 2 commits behind head on develop.

Files Patch % Lines
...t/librawspeed/adt/CoalescingOutputIteratorTest.cpp 66.66% 10 Missing :warning:
@@ Coverage Diff @@
## develop #674 +/- ##
===========================================
+ Coverage 60.68% 60.93% +0.25% 
===========================================
 Files 268 271 +3 
 Lines 16193 16326 +133 
 Branches 2062 2075 +13 
===========================================
+ Hits 9826 9949 +123 
- Misses 6239 6249 +10 
 Partials 128 128 
Flag Coverage Δ
benchmarks 11.83% <61.36%> (+0.40%) :arrow_up:
integration 45.17% <0.00%> (-0.36%) :arrow_down:
linux 57.15% <68.42%> (+0.08%) :arrow_up:
macOS 25.05% <90.22%> (+0.57%) :arrow_up:
rpu_u 45.17% <0.00%> (-0.36%) :arrow_down:
unittests 21.53% <42.10%> (+0.17%) :arrow_up:
windows ∅ <ø> (∅)


meson: update subprojects

Signed-off-by: Rosen Penev

unitTests: fix sign comparison warnings

Signed-off-by: Rosen Penev

Add vignetting data for Olympus Zuiko Digital ED 50mm f/2.0 Macro

upload 38c228

Merge pull request #673 from kmilos/kmilos/panny_tz95d

Add Panasonic DC-TZ95D alias

In keeping w/ the tradition, the "D" seems to denote just the back LCD panel refresh.

Resolves https://github.com/darktable-org/darktable/issues/16353

https://github.com/darktable-org/darktable/issues/16353 also mentions ZS-80, is that something we should also add?

Ah, we have that as ZS80.

@kmilos thank you!

One could in theory also add TZ97, TZ96D and ZS80D...

Add Olympus Zuiko Digital 70-300mm F4.0-5.6

upload 1d2581

Add Olympus Zuiko Digital Pro ED 35-100mm F2.0

upload c934d7

updated vignetting data

Olympus Zuiko Digital ED 14-35mm F2.0 SWD, Olympus Zuiko Digital ED 12-60mm f/2.8-4.0 SWD

Merge pull request #671 from LebedevRI/bitvacuumerjpeg-perf

Implement simple BitVacuumerJPEGBenchmark

Much like with BitStreamer, the 0xFF byte is special, so if we want to know the true performance of BitVacuumerJPEG, we need to feed it a true JPEG-like stream, with 0xFF bytes occurring at the same frequency as in BitStreamerJPEGBenchmark.
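For reference, JPEG byte stuffing is what makes 0xFF special: every 0xFF data byte in the entropy-coded segment must be followed by a stuffed 0x00 so it cannot be confused with a marker. A minimal sketch of that rule (not the BitVacuumerJPEG code itself):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal JPEG byte-stuffing sketch: append 0x00 after every 0xFF data
// byte, so decoders can tell data bytes apart from markers.
inline std::vector<uint8_t> stuffBytes(const std::vector<uint8_t>& in) {
  std::vector<uint8_t> out;
  out.reserve(in.size());
  for (uint8_t b : in) {
    out.push_back(b);
    if (b == 0xFF)
      out.push_back(0x00); // stuffing byte, stripped again when decoding
  }
  return out;
}
```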

Merge pull request #665 from LebedevRI/bitstream-perf

Add very crude BitVacuumer benchmark

BitVacuumer::put(): don't special-case zero bit count

Comparing bench/librawspeed/bitstreams/BitVacuumerBenchmark-old to bench/librawspeed/bitstreams/BitVacuumerBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
--------------------------------------------------------------------------------------------------------------------------------------
BM/4194304_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/4194304_mean -0.0691 -0.0691 8 8 8 8
BM/4194304_median -0.0712 -0.0712 8 7 8 7
BM/4194304_stddev +10.3823 +10.4597 0 0 0 0
BM/4194304_cv +11.2274 +11.3106 0 0 0 0
BM/4194304_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/4194304_mean -0.0649 -0.0649 8 7 8 7
BM/4194304_median -0.0648 -0.0648 8 7 8 7
BM/4194304_stddev +1.0849 +1.0516 0 0 0 0
BM/4194304_cv +1.2297 +1.1941 0 0 0 0
BM/4194304_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/4194304_mean -0.0345 -0.0345 8 8 8 8
BM/4194304_median -0.0325 -0.0325 8 8 8 8
BM/4194304_stddev -0.6149 -0.6124 0 0 0 0
BM/4194304_cv -0.6011 -0.5986 0 0 0 0
BM/4194304_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/4194304_mean -0.0544 -0.0544 8 7 8 7
BM/4194304_median -0.0543 -0.0543 8 7 8 7
BM/4194304_stddev -0.2789 -0.2807 0 0 0 0
BM/4194304_cv -0.2374 -0.2393 0 0 0 0
BM/4194304_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM/4194304_mean -0.1186 -0.1186 9 8 9 8
BM/4194304_median -0.1153 -0.1153 9 8 9 8
BM/4194304_stddev -0.8102 -0.8112 0 0 0 0
BM/4194304_cv -0.7847 -0.7858 0 0 0 0
OVERALL_GEOMEAN -0.0687 -0.0687 0 0 0 0

BitStreamCacheBase::push(): allow pushing zero bits

When push() is called by BitStreamer, count is a constant integer, so we'll drop the zero check.
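The point can be sketched with a toy bit cache (illustrative, not BitStreamCacheBase itself): the shift-and-or push sequence is already well-defined for a zero count, so no special case is needed as long as the shift amount stays below the word width.

```cpp
#include <cassert>
#include <cstdint>

// Toy MSB-first bit cache: pushing `count` bits is a plain shift-and-or,
// which also works for count == 0 (shift by zero, or-in zero bits).
struct BitCache {
  uint64_t cache = 0;
  int fillLevel = 0; // number of valid bits in the cache

  void push(uint64_t bits, int count) {
    // precondition: 0 <= count and fillLevel + count <= 64
    cache = (cache << count) | bits;
    fillLevel += count;
  }
};
```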

Merge pull request #670 from LebedevRI/byte

Make BitStreamer more byte-friendly

BitStreamerJPEG::fillCache(): when there's 0xFF byte, we consume at least 5 bytes

BitStreamerJPEG::fillCache(): when encountering end-of-stream we consume at least 6 bytes

Comparing bench/librawspeed/bitstreams/BitStreamerJPEGBenchmark-old to bench/librawspeed/bitstreams/BitStreamerJPEGBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
--------------------------------------------------------------------------------------------------------------------------------------------
BM/Stuffed/16777216_pvalue 0.1000 0.1000 U Test, Repetitions: 3 vs 3. WARNING: Results unreliable! 9+ repetitions recommended.
BM/Stuffed/16777216_mean -0.1233 -0.1233 9396 8238 9396 8237
BM/Stuffed/16777216_median -0.1240 -0.1240 9403 8237 9402 8237
BM/Stuffed/16777216_stddev -0.7235 -0.7254 14 4 14 4
BM/Stuffed/16777216_cv -0.6846 -0.6868 0 0 0 0
BM/Unstuffed/16777216_pvalue 0.1000 0.1000 U Test, Repetitions: 3 vs 3. WARNING: Results unreliable! 9+ repetitions recommended.
BM/Unstuffed/16777216_mean -0.1439 -0.1439 8665 7419 8665 7419
BM/Unstuffed/16777216_median -0.1446 -0.1446 8673 7419 8673 7419
BM/Unstuffed/16777216_stddev -0.9929 -0.9894 14 0 14 0
BM/Unstuffed/16777216_cv -0.9917 -0.9876 0 0 0 0
BM/Unstuffed/16777216_pvalue 0.1000 0.1000 U Test, Repetitions: 3 vs 3. WARNING: Results unreliable! 9+ repetitions recommended.
BM/Unstuffed/16777216_mean +0.3519 +0.3519 1828 2472 1828 2471
BM/Unstuffed/16777216_median +0.3528 +0.3528 1827 2472 1827 2471
BM/Unstuffed/16777216_stddev -0.9250 -0.9228 2 0 2 0
BM/Unstuffed/16777216_cv -0.9445 -0.9429 0 0 0 0
OVERALL_GEOMEAN +0.0049 +0.0049 0 0 0 0

Revert "Merge remote-tracking branch 'upstream/pr/668' into develop"

While waiting for the right time to do this, I forgot the prep step of splitting the cast into sub-cast steps. Let's do that first.

This reverts commit 446a71d613fb32d94fd1d5253b334f491b9a37db, reversing changes made to 63844eab69644ffc9949c6a9e4b2e0f9fb68a27e.

This reverts commit 8aae6c9a86abeca5cec20c42046ff9a8993b67bd. This reverts commit 70de40e352cb9ac4db3f00c7b2f9e712c3ba3d65.

Merge pull request #669 from LebedevRI/bit

Drop some `` bits

ThreefrDecoder::decodeMetaDataInternal(): casts are not lossless

TiffDecoderFuzzer-ThreefrDecoder: /src/librawspeed/src/librawspeed/decoders/../adt/Casts.h:66: Ttgt rawspeed::lossless_cast(Tsrc) [Ttgt = int, Tsrc = float]: Assertion `impl::is_bitwise_identical(roundTrippedValue, value)' failed.
<...>
#9 0x6314b4 in rawspeed::ThreefrDecoder::decodeMetaDataInternal(rawspeed::CameraMetaData const*) /src/librawspeed/src/librawspeed/decoders/ThreefrDecoder.cpp

Merge remote-tracking branch 'upstream/pr/668' into develop

  • upstream/pr/668:
      • Casts.h: use concepts, not is_*_v
      • DngDecoder::decodeBlackLevels(): cast is not lossless
      • NefDecoder::DecodeNikonSNef(): cast is not lossless
      • DngOpcodes: PolynomialMap: cast is not lossless
      • IiqDecoder::PhaseOneFlatField(): cast is not lossless
      • VC5Decompressor: decompand(): cast is not lossless
      • Spline::calculateCurve(): cast is not lossless
      • lossless_cast(): actually enforce the invariant
      • Finally rename implicit_cast as lossless_cast

DngDecoder::decodeBlackLevels(): cast is not lossless

#6 0x00005555555f3170 in rawspeed::lossless_cast (value=) at /home/lebedevri/rawspeed/src/librawspeed/decoders/../adt/Casts.h:66
 newValue = 
 roundTrippedValue = 
#7 0x00005555555f3170 in rawspeed::DngDecoder::decodeBlackLevels (this=, raw=)
 blackdim = 
 black_entry =

DngOpcodes: PolynomialMap: cast is not lossless

#6 0x0000555555616009 in rawspeed::lossless_cast (value=) at /home/lebedevri/rawspeed/src/librawspeed/common/../adt/Casts.h:66
 newValue = 
 roundTrippedValue =

IiqDecoder::PhaseOneFlatField(): cast is not lossless

#6 0x000055555561cc9d in rawspeed::lossless_cast (value=) at /home/lebedevri/rawspeed/src/librawspeed/decoders/../adt/Casts.h:66
 newValue = 
 roundTrippedValue =

VC5Decompressor: decompand(): cast is not lossless

#6 0x0000555555607eb4 in rawspeed::lossless_cast (value=1.0000463170273877) at /home/lebedevri/rawspeed/src/librawspeed/decompressors/../adt/Casts.h:67
 newValue = 1
 roundTrippedValue = 1

Spline::calculateCurve(): cast is not lossless

[----------] Global test environment tear-down
[==========] 668 tests from 12 test suites ran. (397 ms total)
[ PASSED ] 666 tests.
[ FAILED ] 2 tests, listed below:
[ FAILED ] SplineDeathTest.ClampUshort16Min
[ FAILED ] SplineDeathTest.ClampUshort16Max

Surprisingly few of these casts ended up being non-lossless, but oss-fuzz will likely help with that :)
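The invariant these fixes enforce can be sketched as follows (a simplified stand-in for rawspeed's lossless_cast, which asserts bitwise identity of the round-tripped value rather than mere equality):

```cpp
#include <cassert>

// Simplified sketch: a cast is "lossless" if converting to the target type
// and back reproduces the original value. rawspeed's real lossless_cast
// asserts this invariant, which is what the fuzzer traces above tripped.
template <typename Ttgt, typename Tsrc>
bool roundTripsLosslessly(Tsrc value) {
  auto converted = static_cast<Ttgt>(value);
  auto roundTripped = static_cast<Tsrc>(converted);
  return roundTripped == value;
}
```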

CMake: document basic type assumptions

Just so I can stop worrying about things that are impossible.

Merge pull request #666 from LebedevRI/six-hundred-sixty-six

Require XCode 15.2 / libc++-16 / clang-15

Is your feature request related to a problem?

No.

Describe the solution you would like

I would like to request a feature that would make it easier to install and update exiv2 on Windows using the winget tool, which is a command-line interface to the Windows Package Manager service, part of the App Installer. You can learn more about winget here.

To support winget installation, you would need to create and submit a manifest file to the winget-pkgs repository, which is the source for the packages available through winget. The manifest file contains the metadata and installer URLs for your software. You can use the winget-create tool to help you generate or update the manifest file.

Describe alternatives you have considered

No.

Desktop

  • OS and version: windows

Any packaging/delivery of exiv2 is out of scope for the project.

Please feel free to contact the winget project/packagers directly, or contribute the PR there and maintain winget delivery yourself.

Thank you for your understanding and contribution!

Thank you @kmilos It seems someone already submitted a PR. 😊

Merge pull request #667 from LebedevRI/prep

Split prep changes from PR666

oss-fuzz: build our own libc++-16

oss-fuzz comes with LLVM15-ish, but that (aside from macOS, which comes with LLVM16) is the only libc++-based distro, so it looks like we can get away with requiring libc++-16.

It is a pity that we still have to support clang-15, but dropping it would be a much more invasive change.

Refs. https://github.com/google/oss-fuzz/issues/9989

See https://github.com/darktable-org/rawspeed/pull/663

LLVM16 migrated into debian stable. macOS 12 will become EOL in summer.

Therefore, for the summer dt release, we can require macOS 13 + XCode 15.2, which means we'll be able to bump required LLVM version up to 15 or even 16.

... but here let's just prepare for the bump. It's best to discover issues before that.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (686394b) 60.41% compared to head (86297c7) 60.41%.

@@ Coverage Diff @@
## develop #667 +/- ##
========================================
 Coverage 60.41% 60.41% 
========================================
 Files 264 264 
 Lines 16093 16093 
 Branches 2052 2052 
========================================
 Hits 9723 9723 
 Misses 6242 6242 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.62% <ø> (ø)
integration 45.80% <ø> (ø)
linux 56.87% <ø> (ø)
macOS 23.99% <ø> (ø)
rpu_u 45.80% <ø> (ø)
unittests 21.48% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Restore localisation of error messages/exceptions

Signed-off-by: Jim Easterbrook (cherry picked from commit a3985001b13132ce4841bf1170626f7ef8f9dae7)

This is an automatic backport of pull request #2924 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (28fa956) 64.57% compared to head (dbb9a80) 64.57%. Report is 2 commits behind head on main.

@@ Coverage Diff @@
## main #2926 +/- ##
=======================================
 Coverage 64.57% 64.57% 
=======================================
 Files 104 104 
 Lines 22196 22196 
 Branches 10882 10882 
=======================================
 Hits 14332 14332 
 Misses 5622 5622 
 Partials 2242 2242 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (d5b6ccd) 64.57% compared to head (13c7fb9) 64.57%.

@@ Coverage Diff @@
## main #2925 +/- ##
=======================================
 Coverage 64.57% 64.57% 
=======================================
 Files 104 104 
 Lines 22196 22196 
 Branches 10882 10882 
=======================================
 Hits 14332 14332 
 Misses 5622 5622 
 Partials 2242 2242 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Bump CMake version

CMake 3.25 is available in Debian stable / Fedora 36, but Ubuntu 22.04 still has 3.22, and Kitware's CMake apt repo for focal only provides 3.24.2.

So at the moment, only require 3.22.

CMake: add_specialized_fuzzer() needs to be a macro

With CMake 3.21's policies, a function body gets its own variable scope, so the variable collecting the list of all fuzzers does not actually get appended to when the function is called multiple times.

Add vignetting profile for Olympus Zuiko Digital ED 12-60mm f/2.8-4.0 SWD

Upload d676be

Add vignetting profile for Olympus Zuiko Digital ED 14-35mm F2.0 SWD

Upload 07d280

See https://github.com/darktable-org/rawspeed/pull/663

LLVM16 migrated into debian stable. macOS 12 will become EOL in summer.

Therefore, for the summer dt release, we can require macOS 13 + XCode 15.2, which means we'll be able to bump required LLVM version up to 15 or even 16.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (2c9444b) 60.41% compared to head (1520a43) 60.41%.

@@ Coverage Diff @@
## develop #666 +/- ##
========================================
 Coverage 60.41% 60.41% 
========================================
 Files 264 264 
 Lines 16093 16093 
 Branches 2052 2052 
========================================
 Hits 9723 9723 
 Misses 6242 6242 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.62% <ø> (ø)
integration 45.80% <ø> (ø)
linux 56.87% <ø> (ø)
macOS 23.99% <ø> (ø)
rpu_u 45.80% <ø> (ø)
unittests 21.48% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

I would be afraid to merge this one! 😨

And not passing at that... should be telling us something!

:imp: :imp: :imp:

(this is waiting at least on the dt 4.6.1 release)

Ha!

dt 4.6.1 has been released, and oss-fuzz appears happy with #667. Let's merge this into darktable and see what the rest of CI says.

Since v0.28.0 libexiv2's error messages have not been localised. This small change adds the _() gettext function to localise the error message before doing the %n substitutions.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (f1632fe) 64.66% compared to head (edda8ed) 64.66%. Report is 1 commit behind head on 0.28.x.

@@ Coverage Diff @@
## 0.28.x #2924 +/- ##
=======================================
 Coverage 64.66% 64.66% 
=======================================
 Files 104 104 
 Lines 22169 22169 
 Branches 10839 10839 
=======================================
 Hits 14335 14335 
 Misses 5590 5590 
 Partials 2244 2244 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Are the messages themselves marked w/ N_()?

Are the messages themselves marked w/ N_()?

Yes. See https://github.com/Exiv2/exiv2/blob/84ce408771859570d2534effaaecb3dbf535b1c4/src/error.cpp#L14

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (686394b) 60.41% compared to head (1dbcfd5) 60.55%. Report is 35 commits behind head on develop.

@@ Coverage Diff @@
## develop #665 +/- ##
===========================================
+ Coverage 60.41% 60.55% +0.13% 
===========================================
 Files 264 265 +1 
 Lines 16093 16148 +55 
 Branches 2052 2056 +4 
===========================================
+ Hits 9723 9778 +55 
 Misses 6242 6242 
 Partials 128 128 
Flag Coverage Δ
benchmarks 11.12% <100.00%> (+0.50%) :arrow_up:
integration 45.65% <0.00%> (-0.16%) :arrow_down:
linux 56.96% <86.95%> (+0.09%) :arrow_up:
macOS 24.26% <96.36%> (+0.26%) :arrow_up:
rpu_u 45.65% <0.00%> (-0.16%) :arrow_down:
unittests 21.40% <0.00%> (-0.08%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #664 from LebedevRI/prefix-code-encoder

Prefix code encoder: rest of the flavors / decoders

Codecov Report

Attention: 39 lines in your changes are missing coverage. Please review.

Comparison is base (190e592) 60.55% compared to head (13deccc) 60.41%.

Files Patch % Lines
...peed/codes/PrefixCodeEncoder/PrefixCodeEncoder.cpp 0.00% 37 Missing :warning:
src/librawspeed/codes/PrefixCodeVectorEncoder.h 0.00% 2 Missing :warning:
@@ Coverage Diff @@
## develop #664 +/- ##
===========================================
- Coverage 60.55% 60.41% -0.14% 
===========================================
 Files 264 264 
 Lines 16057 16093 +36 
 Branches 2052 2052 
===========================================
 Hits 9723 9723 
- Misses 6206 6242 +36 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.62% <0.00%> (-0.03%) :arrow_down:
integration 45.80% <0.00%> (-0.11%) :arrow_down:
linux 56.87% <0.00%> (-0.13%) :arrow_down:
macOS 23.99% <0.00%> (-0.11%) :arrow_down:
rpu_u 45.80% <0.00%> (-0.11%) :arrow_down:
unittests 21.48% <0.00%> (-0.05%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge remote-tracking branch 'upstream/pr/663' into develop

  • upstream/pr/663: compiler-versions.cmake: 2024 edition

Merge remote-tracking branch 'upstream/pr/662' into develop

  • upstream/pr/662:
    Bit counting is missing in clang14
    PrefixCodeEncoder: support DNG 1.0 LJpeg stream flavor
    PrefixCodeEncoder: support full (difference) encoding

LLVM16 migrated into debian stable. macOS 12 will become EOL in summer.

Therefore, for the summer dt release, we can require macOS 13 + XCode 15.2, which means we'll be able to bump required LLVM version up to 15 or even 16.

@TurboGit @zisoft FYI ^ I'm really looking forward to dropping at least the LLVM14 support.

Yes, that's the plan and I have already made some tests with the macos-13 runner. Unfortunately, Xcode 15 segfaults on this runner when compiling src/common/iop_profile.c: 15945. Does not happen with Xcode 14.

Is there something special in iop_profile.c? I have opened this issue in the hope that some compiler experts can have a look.

Yes, that's the plan and I have already made some tests with the macos-13 runner.

Do you think the bump could happen right after the next point release of the stable series of dt? (pending https://github.com/darktable-org/darktable/issues/15945 ?)

@zisoft : There is some DT_CLONE_TARGETS on this unit, but that's not the only one. It is a somewhat large file too (66k), but nothing huge. So I'm not sure why iop_profile.c crashes the compiler here.

@LebedevRI : Seems like LLVM version 16 in Debian is not in stable but in testing at the moment.

@LebedevRI : Seems like LLVM version 16 in Debian is not in stable but in testing at the moment.

Are you sure you are not mixing up LLVM 17 and LLVM 16? LLVM17 is in testing and not stable, but LLVM16 did migrate into stable: https://packages.debian.org/bookworm/clang-16

That's strange...

$ apt policy llvm
llvm:
 Installed: (none)
 Candidate: 1:16.0-57
 Version table:
 1:16.0-57 500
 500 http://ftp.debian.org/debian testing/main amd64 Packages
 1:14.0-55.7~deb12u1 500
 500 http://ftp.debian.org/debian stable/main amd64 Packages

So 14 on stable and 16 on testing.

Likewise for clang package:

$ apt policy clang
[sudo] password for pascal: 
clang:
 Installed: (none)
 Candidate: 1:16.0-57
 Version table:
 1:16.0-57 500
 500 http://ftp.debian.org/debian testing/main amd64 Packages
 1:14.0-55.7~deb12u1 500
 500 http://ftp.debian.org/debian stable/main amd64 Packages

Wait... That's because this is only the default install, but one can install more recent versions:

On testing:

$ apt show clang-17
Package: clang-17
Version: 1:17.0.6-5
Priority: optional
Section: devel
Source: llvm-toolchain-17
...

So, indeed, no problem to bump the Clang version.

I'm not sure what you are looking at; I literally have an LLVM16 @ bookworm CI entry: https://github.com/darktable-org/rawspeed/actions/runs/7895101031/job/21546912094

As I've said, rawspeed will only be able to bump to LLVM15 (https://github.com/google/oss-fuzz/issues/9989 :/), but dt should be able to bump all the way to LLVM16.

Do you think the bump could happen right after the next point release of the stable series of dt?

With darktable.org/darktable#15945 fixed we can bump to

{ os: macos-13, xcode: "15.2", deployment: 13.5 }

Awesome! Let's get point release out of the door soon, and do that!

compiler-versions.cmake: 2024 edition

LLVM16 migrated into debian stable. macOS 12 will become EOL in summer.

Therefore, for the summer dt release, we can require macOS 13 + XCode 15.2, which means we'll be able to bump required LLVM version up to 15 or even 16.

Codecov Report

Attention: 30 lines in your changes are missing coverage. Please review.

Comparison is base (d5bba51) 60.61% compared to head (12aa613) 60.55%.

Files Patch % Lines
src/librawspeed/codes/PrefixCodeVectorEncoder.h 0.00% 13 Missing :warning:
src/librawspeed/codes/AbstractPrefixCodeEncoder.h 0.00% 9 Missing :warning:
src/librawspeed/adt/Bit.h 33.33% 8 Missing :warning:
@@ Coverage Diff @@
## develop #662 +/- ##
===========================================
- Coverage 60.61% 60.55% -0.07% 
===========================================
 Files 264 264 
 Lines 16040 16057 +17 
 Branches 2052 2052 
===========================================
 Hits 9723 9723 
- Misses 6189 6206 +17 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.65% <0.00%> (-0.02%) :arrow_down:
integration 45.90% <12.12%> (-0.06%) :arrow_down:
linux 56.99% <12.12%> (-0.07%) :arrow_down:
macOS 24.10% <0.00%> (-0.02%) :arrow_down:
rpu_u 45.90% <12.12%> (-0.06%) :arrow_down:
unittests 21.53% <0.00%> (-0.03%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #2923 from kevinbackhouse/update-security-policy

Update security policy

Merge pull request #661 from LebedevRI/prefix-code-encoder

Prefix code encoder

Changes copied from 0.28.x branch.

I forgot to include this in #2921.

Apply @kmilos's suggestions from #2919 to the main branch.

Kinda did it in https://github.com/Exiv2/exiv2/pull/2920 😉

Just keep the release workflow fix?

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (a08de6f) 64.03% compared to head (a33ca3c) 64.03%. Report is 1 commit behind head on main.

@@ Coverage Diff @@
## main #2921 +/- ##
=======================================
 Coverage 64.03% 64.03% 
=======================================
 Files 104 104 
 Lines 22409 22409 
 Branches 10882 10882 
=======================================
 Hits 14350 14350 
 Misses 5830 5830 
 Partials 2229 2229 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (16584be) 64.03% compared to head (285c985) 64.57%.

@@ Coverage Diff @@
## main #2920 +/- ##
==========================================
+ Coverage 64.03% 64.57% +0.53% 
==========================================
 Files 104 104 
 Lines 22409 22196 -213 
 Branches 10882 10882 
==========================================
- Hits 14350 14332 -18 
+ Misses 5830 5622 -208 
- Partials 2229 2242 +13 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

These are changes that were made on the main branch but haven't been backported to 0.28.x yet. The only difference now is that I didn't backport special_noFilesystemAccess because I think that's a new feature on the main branch.

Codecov Report

Attention: 18 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (981926b) 64.66%. Report is 38 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/quicktimevideo.cpp 88.00% 2 Missing and 1 partial :warning:
src/value.cpp 86.36% 0 Missing and 3 partials :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/bmffimage.cpp 66.66% 0 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2919 +/- ##
==========================================
+ Coverage 63.99% 64.66% +0.66% 
==========================================
 Files 103 104 +1 
 Lines 22338 22169 -169 
 Branches 10821 10839 +18 
==========================================
+ Hits 14296 14335 +39 
+ Misses 5818 5590 -228 
- Partials 2224 2244 +20 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Shall we do one last compare after the last few commits to main?

Merge pull request #2915 from kevinbackhouse/fix-GHSA-crmj-qh74-2r36

prevent unbounded recursion in QuickTimeVideo::multipleEntriesDecoder

Merge pull request #2916 from kevinbackhouse/fix-GHSA-g9xm-7538-mq8w

Avoid out-of-bounds read in QuickTimeVideo::NikonTagsDecoder

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (36e3d55) 63.93% compared to head (39555d8) 63.93%.

@@ Coverage Diff @@
## main #2918 +/- ##
=======================================
 Coverage 63.93% 63.93% 
=======================================
 Files 104 104 
 Lines 22400 22400 
 Branches 10877 10877 
=======================================
 Hits 14322 14322 
 Misses 5854 5854 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Why?

Not critical of course - that's more likely what the end users/packagers will use (like we already do in case of macOS, MSYS2, etc...), and might be quicker to download/install from github images/mirrors than pypi...

Not critical of course - that's more likely what the end users/packagers will use (like we already do in case of macOS, MSYS2, etc...), and might be quicker to download/install from github images/mirrors than pypi...

This makes sense to me. I always use apt-get rather than pip whenever possible.

Codecov Report

Attention: 102 lines in your changes are missing coverage. Please review.

Comparison is base (801bd8b) 60.97% compared to head (e4739c1) 60.61%.

Files Patch % Lines
...peed/codes/PrefixCodeEncoder/PrefixCodeEncoder.cpp 0.00% 64 Missing :warning:
src/librawspeed/codes/PrefixCodeVectorEncoder.h 0.00% 24 Missing :warning:
...c/librawspeed/codes/AbstractPrefixCodeTranscoder.h 69.56% 6 Missing and 1 partial :warning:
src/librawspeed/codes/AbstractPrefixCodeEncoder.h 0.00% 6 Missing :warning:
src/librawspeed/codes/AbstractPrefixCodeDecoder.h 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #661 +/- ##
===========================================
- Coverage 60.97% 60.61% -0.37% 
===========================================
 Files 260 264 +4 
 Lines 15946 16041 +95 
 Branches 2044 2052 +8 
===========================================
 Hits 9723 9723 
- Misses 6095 6190 +95 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.66% <0.00%> (-0.07%) :arrow_down:
integration 45.96% <14.15%> (-0.31%) :arrow_down:
linux 57.06% <14.15%> (-0.37%) :arrow_down:
macOS 24.11% <0.00%> (-0.11%) :arrow_down:
rpu_u 45.96% <14.15%> (-0.31%) :arrow_down:
unittests 21.55% <0.00%> (-0.13%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #660 from LebedevRI/cmake

CMake: cache more stuff

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (cd1d81e) 60.97% compared to head (34f0b24) 60.97%.

@@ Coverage Diff @@
## develop #660 +/- ##
========================================
 Coverage 60.97% 60.97% 
========================================
 Files 260 260 
 Lines 15946 15946 
 Branches 2044 2044 
========================================
 Hits 9723 9723 
 Misses 6095 6095 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.72% <ø> (ø)
integration 46.26% <ø> (ø)
linux 57.42% <ø> (ø)
macOS 24.21% <ø> (ø)
rpu_u 46.26% <ø> (ø)
unittests 21.68% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Update version numbers for the new release.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (981926b) 64.66% compared to head (f1632fe) 64.66%. Report is 1 commit behind head on 0.28.x.

@@ Coverage Diff @@
## 0.28.x #2917 +/- ##
=======================================
 Coverage 64.66% 64.66% 
=======================================
 Files 104 104 
 Lines 22169 22169 
 Branches 10839 10839 
=======================================
 Hits 14335 14335 
 Misses 5590 5590 
 Partials 2244 2244 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Shall we sync the CI to main before release? @neheb

Also, will need to update https://github.com/Exiv2/exiv2/blob/main/SECURITY.md, on both 0.28.x and main?

@kmilos: Good point about security.md. I'll add that to this PR. What's the issue with the CI that needs to be synced?

What's the issue with the CI that needs to be synced?

The usual: Conan version and actions out of date... Not a must, more of a "nice to have"...

I think we also have fewer jobs on main and handled the constantly failing FreeBSD one...

I updated SECURITY.md. I'm wondering if we should remove all mention of versions before 0.27 though. In fact, are we even still supporting 0.27 at this point?

In fact, are we even still supporting 0.27 at this point?

Not really, but we have somewhat of a problem because of the wchar_t removal in 0.28.x, so some users still have to use 0.27 on Windows.

It seems the FreeBSD VM at https://github.com/Exiv2/exiv2/actions/runs/7888790070/job/21527100716?pr=2917 is not booting properly, which leads to the timeout. I may try to compile the latest and greatest code on a real FreeBSD 14 machine in order to check that. Later today.

It seems the FreeBSD VM at https://github.com/Exiv2/exiv2/actions/runs/7888790070/job/21527100716?pr=2917 is not booting properly, which leads to the timeout. I may try to compile the latest and greatest code on a real FreeBSD 14 machine in order to check that. Later today.

@1div0: I'm hoping to fix that with #2919

I squashed the commits.

This is a rebase of https://github.com/Exiv2/exiv2/commit/a98d76cc6f97474dcebd6164bd0496c68b826783, which fixed https://github.com/Exiv2/exiv2-ghsa-g9xm-7538-mq8w, onto the main branch.

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (36e3d55) 63.93% compared to head (79ab2f6) 64.02%.

Files Patch % Lines
src/quicktimevideo.cpp 50.00% 1 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2916 +/- ##
==========================================
+ Coverage 63.93% 64.02% +0.08% 
==========================================
 Files 104 104 
 Lines 22400 22402 +2 
 Branches 10877 10879 +2 
==========================================
+ Hits 14322 14343 +21 
+ Misses 5854 5830 -24 
- Partials 2224 2229 +5 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Credit to OSS-Fuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=64151

nul-terminate buf to avoid out-of-bounds read

Regression test for https://github.com/Exiv2/exiv2/security/advisories/GHSA-g9xm-7538-mq8w

This is a rebase of https://github.com/Exiv2/exiv2/commit/355afea485550e8214ac6b449fb210a7efb71365, which fixed https://github.com/Exiv2/exiv2/security/advisories/GHSA-crmj-qh74-2r36, onto the main branch.

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (36e3d55) 63.93% compared to head (9d69a71) 63.94%.

Files Patch % Lines
src/bmffimage.cpp 66.66% 0 Missing and 1 partial :warning:
src/quicktimevideo.cpp 95.23% 1 Missing :warning:
@@ Coverage Diff @@
## main #2915 +/- ##
==========================================
+ Coverage 63.93% 63.94% +0.01% 
==========================================
 Files 104 104 
 Lines 22400 22407 +7 
 Branches 10877 10880 +3 
==========================================
+ Hits 14322 14329 +7 
 Misses 5854 5854 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Credit to OSS-Fuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=65541

Add recursion_depth parameter to ensure that the recursion doesn't go too deep.

Regression test for https://github.com/Exiv2/exiv2/security/advisories/GHSA-crmj-qh74-2r36

Add NIKKOR Z 28mm f/2.8

Upload eafb3e

Merge pull request #658 from LebedevRI/footprint

CMake: improve disk footprint of build directory

CMake: when debug info is enabled, also enable split dwarf

This results in an additional 28% footprint reduction of the Release build directory.

CMake: when debug info is enabled, also enable its compression

This results in a 48% smaller footprint of the Release build directory...

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (b737e89) 60.97% compared to head (8850e3c) 60.97%.

@@ Coverage Diff @@
## develop #658 +/- ##
========================================
 Coverage 60.97% 60.97% 
========================================
 Files 260 260 
 Lines 15946 15946 
 Branches 2044 2044 
========================================
 Hits 9723 9723 
 Misses 6095 6095 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.72% <ø> (ø)
integration 46.26% <ø> (ø)
linux 57.42% <ø> (ø)
macOS 24.21% <ø> (ø)
rpu_u 46.26% <ø> (ø)
unittests 21.68% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

CMake: stop telling gtest to dump output xml's

IIRC those were (going to be?) used in SonarCloud, to get the test coverage, but that never worked.

Merge pull request #657 from LebedevRI/coverage

CI: enable sample-based testing for all linux-fast jobs, not just Coverage

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (10320be) 60.97% compared to head (300c73d) 60.97%.

@@ Coverage Diff @@
## develop #657 +/- ##
========================================
 Coverage 60.97% 60.97% 
========================================
 Files 260 260 
 Lines 15946 15946 
 Branches 2044 2044 
========================================
 Hits 9723 9723 
 Misses 6095 6095 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.72% <ø> (ø)
integration 46.26% <ø> (ø)
linux 57.42% <ø> (ø)
macOS 24.21% <ø> (ø)
rpu_u 46.26% <ø> (ø)
unittests 21.68% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

BitStreamerJPEG: add name to friend function's argument

dt's Fedora 38 PPC64LE build (and only that one?!) is failing with a very weird

[ 337s] In file included from /home/abuild/rpmbuild/BUILD/darktable-4.7.0~git534.af89bfbf/src/external/rawspeed/src/librawspeed/decoders/../decompressors/SonyArw1Decompressor.h:25,
[ 337s] from /home/abuild/rpmbuild/BUILD/darktable-4.7.0~git534.af89bfbf/src/external/rawspeed/src/librawspeed/decoders/ArwDecoder.cpp:35:
[ 337s] /home/abuild/rpmbuild/BUILD/darktable-4.7.0~git534.af89bfbf/src/external/rawspeed/src/librawspeed/decoders/../io/BitStreamerMSB.h:47:29: error: default argument specified in explicit specialization [-fpermissive]
[ 337s] 47 | friend void Base::fill(int); // Allow it to call our `fillCache()`.
[ 337s] | ^
[ 338s] gmake[2]: *** [lib64/darktable/rawspeed/src/librawspeed/decoders/CMakeFiles/rawspeed_decoders.dir/build.make:93:

... which makes zero sense since we clearly don't have a default argument here. Maybe giving it a name helps? Probably not.

Merge pull request #656 from LebedevRI/ctu

CI: split off CTU static analysis into a separate job

Merge pull request #655 from LebedevRI/cmake-build-type

CMake: RelWithDebInfo is pointless, replace it with ReleaseWithAsserts

CMake: RelWithDebInfo is pointless, replace it with ReleaseWithAsserts

We always enable debug info.

And actually I thought assertions were enabled in RelWithDebInfo, but they are not.

So ReleaseWithAsserts is more interesting.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (d9d14dc) 60.97% compared to head (ff3e87a) 60.97%.

@@ Coverage Diff @@
## develop #655 +/- ##
========================================
 Coverage 60.97% 60.97% 
========================================
 Files 260 260 
 Lines 15946 15946 
 Branches 2044 2044 
========================================
 Hits 9723 9723 
 Misses 6095 6095 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.72% <ø> (ø)
integration 46.26% <ø> (ø)
linux 57.42% <ø> (ø)
macOS 24.21% <ø> (ø)
rpu_u 46.26% <ø> (ø)
unittests 21.68% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

I've got two low severity security bugs that I'd like to release fixes for. Is everyone happy for me to release v0.28.2? I'll take care of the related tasks like updating the website.

Output of git shortlog -s -n v0.28.1..0.28.x:

 16 Miloš Komarčević
 7 Jim Easterbrook
 3 Miguel Borges de Freitas
 2 Pino Toscano
 2 Rosen Penev

That's: @kmilos @jim-easterbrook @enen92 @pinotree @neheb

Code changes in this release are listed in the milestone: https://github.com/Exiv2/exiv2/milestone/13

Links to the two security issues (currently still private): * CVE-2024-24826 * CVE-2024-25112

Nice. I believe that those regex purges may make some people happy :)

I believe that those regex purges may make some people happy :)

@kamiccolo I think there is still some way to go on those: https://gitlab.gnome.org/GNOME/gimp/-/merge_requests/1102#note_2011736

I believe that those regex purges may make some people happy :)

@kamiccolo I think there is still some way to go on those: https://gitlab.gnome.org/GNOME/gimp/-/merge_requests/1102#note_2011736

Oh yeah, there were some left-overs last time I checked.

  • canonmn_int.cpp: https://github.com/Exiv2/exiv2/blob/main/src/canonmn_int.cpp#L2888-L2915
  • value.cpp: https://github.com/Exiv2/exiv2/blob/main/src/value.cpp#L898-L924
  • version.cpp: https://github.com/Exiv2/exiv2/blob/main/src/version.cpp#L88-L97
  • app/exiv2.cpp: https://github.com/Exiv2/exiv2/blob/main/app/exiv2.cpp#L504-L515
  • app/actions.cpp: https://github.com/Exiv2/exiv2/blob/main/app/actions.cpp#L422-L431

(at least those)

aurelienpierreeng / ansel (Public, forked from edgardoh/darktable)

Issues: aurelienpierreeng/ansel


68 Open

145 Closed

68 Open

145 Closed


Issues list

  • #327 Selective History Stack Copy (opened Feb 11, 2024 by CavemanChris, 1 comment)
  • #326 Crash while using liquefy (opened Feb 10, 2024 by Giorey01)
  • #322 Visual mapping button missing? (opened Feb 3, 2024 by Sharparam, 1 comment)
  • #320 Crash on import-Copy to disk (opened Jan 28, 2024 by elidigui, 4 comments)
  • #318 Cannot apply any change to a specific image in the darkroom (opened Jan 20, 2024 by marco44, 1 comment)
  • #317 Mouse pan/zoom leaks GPU memory in Darkroom (opened Jan 15, 2024 by tajuma, 1 comment)
  • #316 Feature request: add export of a list of selected file names [difficulty: easy, enhancement] (opened Jan 15, 2024 by blonchkman)
  • #313 The export result is considerably darker than the central view in the darkroom (opened Jan 5, 2024 by blonchkman, 2 comments)
  • #312 masks: hiding the mask view randomly causes a black or disturbed central view (opened Jan 4, 2024 by blonchkman, 2 comments)
  • #310 Retouch module does not show/apply correction (opened Jan 3, 2024 by antoniopiazza, 4 comments)
  • #308 Horizon and perspective breaks when crop is applied prior (opened Jan 3, 2024 by trougnouf, 1 comment)
  • #306 Image distorted when changing TCA overwrite, fixes when zooming [bug, milestone 0.1] (opened Jan 1, 2024 by pjhyor7)
  • #296 Zooming messes with pipeline (preview in Darkroom not stable) [bug, milestone 0.1] (opened Dec 24, 2023 by LucaZulberti, 4 comments)
  • #293 UI Scopes module [redesign, milestone 0.1] (opened Dec 22, 2023 by audiomartin, 3 comments)
  • #291 Import adds an invalid empty folder [regression, milestone 0.1] (opened Dec 22, 2023 by vtyrtov, 1 comment)
  • #289 No copy on Windows [regression, milestone 0.1] (opened Dec 21, 2023 by wadouk, 3 comments)
  • #284 Make pasting development recompute pipe caches in darkroom too [enhancement, priority: critical] (opened Dec 18, 2023 by Jiyone, 2 comments)
  • #277 Framing module: increasing border size past a certain point does not preserve the frame's aspect ratio [unclear] (opened Dec 16, 2023 by pedrorrodriguez, 1 comment)
  • #269 Unable to clear history [priority: critical, regression] (opened Dec 13, 2023 by kred, 5 comments)
  • #264 Pickers given priority when snapshot function is active [enhancement] (opened Dec 10, 2023 by pjhyor7, 1 comment)
  • #262 Ansel freezes by applying style from the darkroom [bug, priority: high] (opened Dec 8, 2023 by lologor)
  • #259 Thoughts on the new theme [enhancement] (opened Dec 8, 2023 by pedrorrodriguez, 14 comments)
  • #257 Improve color picker viewport display [enhancement] (opened Dec 7, 2023 by pedrorrodriguez)
  • #256 Display metadata placement [enhancement] (opened Dec 7, 2023 by pedrorrodriguez, 5 comments)
  • #254 Zoom levels should not be on the navigation module [enhancement] (opened Dec 7, 2023 by pedrorrodriguez, 3 comments)


I googled it and used the Chantal AI, and got a little confused. Either I'm doing something stupid or there's no "history stack selective copy" feature in Ansel. If there is, it's not as easy to work with as in darktable, which has a separate tab; I can't find a way to do it. If there isn't, I think it would be a good idea to implement.

I like your minimalist approach. I saw the video about the coffee maker and I agree: the old stuff is made solid. But sometimes minimalism takes away some cool features, e.g. the Easter egg game. I didn't know that was in darktable until I saw the video.

Also, this might not be the appropriate place, but considering your software is potentially superior to Photoshop in that it's scene-referred, how would it stack up against other software like SilverFast HDR Studio? A YouTube video about it would be awesome, or a quick message. Instagram @chris__ratke, email candidchris@protonmail.com.

... Either I'm doing something stupid or there's no "history stack selective copy" feature in Ansel...

These commands have been moved to Ansel's global menu:

  • Edit --> Copy development: copies the current state and params of all modules involved, up to the currently selected history step (there is no choice here).
  • Edit --> Paste development (all): pastes all modules.
  • Edit --> Paste development (parts): opens a dialog for selecting which modules to paste.

Hope that might help

Crash while using liquefy on JPG

System

  • OS: Win11
  • Graphics card: MSI GTX 1660
  • Graphics driver: Nvidia 551.23

Here's the crash report:

this is ansel 0.0.0+729~ge2c4a0a reporting an exception:


Error occurred on Saturday, February 10, 2024 at 10:21:39.

ansel.exe caused an Access Violation at location 00007FF9D81A1ECF in module libgtk-3-0.dll Reading from location FFFFFFFFFFFFFFFF.

AddrPC           Params
00007FF9D81A1ECF 0000000000000000 0000000000000002 00000236045B7A00  libgtk-3-0.dll!gtk_widget_queue_draw+0x2f
00007FF9D8A330D9 0000000000000000 0000000000000000 0000000000000000  libansel.dll!dt_conf_cleanup+0x189
00007FFA1628EB5C 0000000000000002 00000236045B7A00 0000000000000000  libglib-2.0-0.dll!g_clear_list+0x11ac
00007FFA16291983 0000000000000000 00007FFA162B3E26 0000000000000000  libglib-2.0-0.dll!g_get_monotonic_time+0xa73
00007FFA16292320 0000000000000000 0000000000000000 00007FF9D8E230C0  libglib-2.0-0.dll!g_main_loop_run+0x120
00007FF9D803BB5E 0000000000000000 00007FF66859406A 00007FF668594028  libgtk-3-0.dll!gtk_main+0x7e
00007FF9D8B2B46F 0002000000000001 0000023602417FD0 0000000000000001  libansel.dll!dt_gui_gtk_run+0xaf
00007FF668592D09 00007FF668591560 00007FF66859277D 00007FFA49B80F28  ansel.exe!0x2d09
00007FF6685914C2 0000000000000000 00007FF668597048 0000000000000000  ansel.exe!0x14c2
00007FF6685912F7 0000000000000000 0000000000000000 0000000000000000  ansel.exe!0x12f7
00007FF668591406 0000000000000000 0000000000000000 0000000000000000  ansel.exe!0x1406
00007FFA4B8C257D 0000000000000000 0000000000000000 0000000000000000  KERNEL32.DLL!BaseThreadInitThunk+0x1d
00007FFA4C34AA58 0000000000000000 0000000000000000 0000000000000000  ntdll.dll!RtlUserThreadStart+0x28

(Loaded-module list trimmed: the report goes on to enumerate the load addresses and versions of the several hundred DLLs mapped into the process, including ansel.exe 0.0.0.729, libansel.dll, libgtk-3-0.dll 3.24.39.0, libglib-2.0-0.dll 2.78.3.0, and the Nvidia OpenCL driver nvopencl64.dll 31.0.15.5123.)

Windows 10.0.22621.2506 DrMingw 0.9.11

Once Sony raw files are converted to DNG using the Adobe converter, exiv2 can't read the maker notes from these DNG files. I experienced the issue with all the Sony A7 cameras I could test: A7III, A7IV, A7RII and A7RV.

Because of this issue, lens correction with embedded metadata doesn't work in darktable.

exiv2 --version
exiv2 0.27.6

exiv2 20221220-7R200672.dng | grep Vignett
Error: Directory Sony2 with 25665 entries considered invalid; not read.

exiv2 20221220-7R200672.dng | grep Chromatic
Error: Directory Sony2 with 25665 entries considered invalid; not read.

exiv2 20221220-7R200672.dng | grep Distortion
Error: Directory Sony2 with 25665 entries considered invalid; not read.

ExifTool, on the other hand, has no problem reading it:

exiftool 20221220-7R200672.dng | grep Vignetting
Vignetting Correction : Off
Vignetting Corr Params : 0 32 192 448 768 1184 1696 2304 3072 4000 5056 6240 7424 8576 9632 10688

exiftool 20221220-7R200672.dng | grep Chromatic
Lateral Chromatic Aberration : Off
Chromatic Aberration Corr Params: 781 731 689 656 632 621 610 586 554 511 466 429 389 339 273 223 -90 -56 -26 4 32 62 84 80 82 120 168 210 260 324 406 502

exiftool 20221220-7R200672.dng | grep "Distortion Corr"
Distortion Correction Setting : Off
Distortion Correction : None
Distortion Corr Params : 12 0 -36 -80 -144 -216 -312 -408 -524 -640 -772 -900 -1036 -1168 -1308 -1440
Distortion Corr Params Number : 16 (Full-frame)
Distortion Corr Params Present : Yes
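For background on the error message above: a TIFF/DNG IFD (including the Sony MakerNote sub-IFD) begins with a 2-byte entry count, and exiv2 rejects the directory when that count is implausibly large (here, 25665). Reading the count at the wrong offset or with the wrong byte order produces exactly this kind of garbage value. A minimal sketch of how such a count is read; `ifd_entry_count` is a hypothetical helper, not exiv2's actual API:

```python
import struct

def ifd_entry_count(path: str, ifd_offset: int, byteorder: str = "<") -> int:
    """Read the 2-byte entry count at the start of a TIFF IFD.

    byteorder is '<' for little-endian (II) files or '>' for
    big-endian (MM) files. A mismatched offset or byte order yields
    bogus counts like the 25665 reported by exiv2 above.
    """
    with open(path, "rb") as f:
        f.seek(ifd_offset)
        (count,) = struct.unpack(byteorder + "H", f.read(2))
    return count
```

For instance, the bytes `41 64` decode to 25665 when read little-endian but 16740 when read big-endian, which is why a byte-order mix-up in the MakerNote parser is one plausible cause.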

It would probably be helpful if you posted a sample DNG.

  • Phil

Here is a link to a Sony A7RII dng converted file: https://www.dropbox.com/scl/fi/f9mkzqtjk0bm3noigjoi2/20221220-7R200672.dng?rlkey=mrqlkicgjrg1nb861twmbqnxm&dl=0

Description of the bug

This is a somewhat vague issue; I'm not sure if it is a bug per se, but I thought it may be worth discussing anyway.

Recently I upgraded from commit 1d3f83d to the latest release (which I think is ba4dc3), and have noticed much slower image processing and UI response. I haven't pinned down any particular module that is slow; it seems to be a general slowdown across many aspects of the editing workflow. My PC hardware, OS, and input images haven't changed, so I would guess it has been some change to Ansel since a few months ago.

Some things I have noticed:

  • The UI often freezes for several seconds upon changing module parameters, even before the "working" prompt comes up (which I assume indicates the image pipeline processing).
  • Sometimes it seems like the image is reprocessed for no reason (the "working" prompt is shown), e.g. when clicking on a module to expand it or changing tabs in module UIs.
  • General flakiness of image processing w.r.t. UI updates, e.g.:
      • The colour picker doesn't always update when selecting a point on the image.
      • Scopes don't always update.
      • Occasionally changing parameters or toggling a module on/off doesn't update the image.
  • When changing multiple module parameters in quick succession, it seems like the image pipeline runs for each change, leading to long delays.
  • The image processing and UI performance issues get worse with more modules active (makes sense).

By the end of editing one image, the UI feels almost unusably slow. Often waiting several seconds after any UI interaction, whereas previously everything felt pretty snappy.

I don't know what's changed in the backend with Ansel recently, but I wonder if there has been any big changes to how UI events are handled and when we decide to rerun the image pipeline.
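One common pattern for the behaviour described above (a full pipeline run per parameter change) is to coalesce rapid changes behind a short quiet period, so only the last change triggers a recompute. This is only a generic sketch of that idea, not how Ansel actually schedules its pipeline:

```python
import threading

class DebouncedRecompute:
    """Coalesce rapid parameter changes into one pipeline run.

    Illustrative sketch only; Ansel's real scheduling differs.
    """
    def __init__(self, recompute, delay=0.25):
        self._recompute = recompute  # callback that reruns the pipeline
        self._delay = delay          # quiet period (seconds) before firing
        self._timer = None
        self._lock = threading.Lock()

    def notify_change(self):
        """Call on every parameter change; restarts the quiet period."""
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # a newer change supersedes the old one
            self._timer = threading.Timer(self._delay, self._recompute)
            self._timer.start()
```

With this scheme, ten slider ticks inside the quiet period trigger a single recompute instead of ten.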

To Reproduce

Do any kind of regular image editing in Darkroom, e.g. exposure, filmic RGB, colour calibration, colour balance, local contrast. By the end the performance issues are apparent to me, as described above.

Expected behavior

I was not expecting performance to change so significantly, given that there haven't been massive changes to the image processing features I use in my editing. My PC is not the most powerful, but I was quite happy with Ansel's performance previously.

System

darktable version: unknown (whichever came with the Ansel install)
OS: Win10, version 22H2, build 19045.3930
Memory: 8GB, 2133 MHz
Graphics card: Nvidia GeForce 920MX
Graphics driver: GeForce Game Ready driver version 537.42
OpenCL installed: unknown
OpenCL activated: No
GTK+: unknown
gcc: unknown
cflags: unknown
CMAKE_BUILD_TYPE: unknown

Additional context

  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? Yes
  • Do you use lua scripts? No

There is nothing specific nor useful in this issue.


Please let me know how I may better/properly report this significant usability regression.

Merge pull request #654 from LebedevRI/bitstreams

Significantly deduplicate Bit Stream implementations

Codecov Report

Attention: 7 lines in your changes are missing coverage. Please review.

Comparison is base (88138d4) 60.95% compared to head (0378e42) 60.97%.

Files Patch % Lines
src/librawspeed/bitstreams/BitVacuumer.h 76.92% 2 Missing and 1 partial :warning:
src/librawspeed/bitstreams/BitStreamerJPEG.h 71.42% 2 Missing :warning:
src/librawspeed/bitstreams/BitStreamer.h 91.66% 1 Missing :warning:
src/librawspeed/bitstreams/BitVacuumerJPEG.h 85.71% 1 Missing :warning:
@@ Coverage Diff @@
## develop #654 +/- ##
===========================================
+ Coverage 60.95% 60.97% +0.02% 
===========================================
 Files 268 260 -8 
 Lines 15967 15946 -21 
 Branches 2047 2044 -3 
===========================================
- Hits 9732 9723 -9 
+ Misses 6104 6095 -9 
+ Partials 131 128 -3 
Flag Coverage Δ
benchmarks 10.72% <42.10%> (+0.05%) :arrow_up:
integration 46.26% <45.71%> (+0.12%) :arrow_up:
linux 57.42% <88.88%> (+0.04%) :arrow_up:
macOS 24.21% <0.00%> (-0.05%) :arrow_down:
rpu_u 46.26% <45.71%> (+0.12%) :arrow_up:
unittests 21.68% <82.05%> (+0.05%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.


aurelienpierreeng/ansel (public, forked from edgardoh/darktable)

Actions: CI workflow (ci.yml), 433 workflow runs

  • CI #644: pull request #324 "Fix GPU memory leak on restarted pixel pipeline", opened by tajuma (tajuma:issue-317), February 9, 2024 18:36, action required
  • CI #643: pull request #319 "Enable Exiv2 0.28.x compilation. Solving issue #8", opened by lologor (lologor:01-2024), January 21, 2024 22:42, action required
  • CI #642: pull request #294 "Import Session : Removed issues for Windows and some other code adjustments", synchronized by Jiyone (Jiyone:win_import), January 2, 2024 13:02, 25m 25s
  • CI #641: pull request #294 (same), synchronized by Jiyone, January 2, 2024 13:00, 3m 17s
  • CI #640: commit af648b7 "Darkroom: dispatch mouse events to modules only if they are enabled", pushed by aurelienpierre to master, January 2, 2024 02:24, 23m 1s
  • CI #639: run by aurelienpierre on master, January 2, 2024 01:24, 23m 13s
  • CI #638: run by aurelienpierre on master, January 2, 2024 01:24, 5s
  • CI #637: commit f57be8c "Darkroom: do not fire the gui_post_expose callback if the module is n…", pushed by aurelienpierre to master, January 2, 2024 01:21, 3m 1s
  • CI #636: commit fe294d4 "Develop.c: minor refactoring for future reuse", pushed by aurelienpierre to master, January 1, 2024 23:40, 26m 1s
  • CI #635: commit 791da58 "Bauhaus: use the max height of a single text line as a minimum for mi…", pushed by aurelienpierre to master, January 1, 2024 17:55, 24m 39s
  • CI #634: commit 6c57f16 "Add darktable as credit for homebrew build", pushed by aurelienpierre to master, January 1, 2024 17:43, 12m 50s
  • CI #633: commit cc40d8b "Translations: add updating script", pushed by aurelienpierre to master, January 1, 2024 17:31, 12m 54s
  • CI #632: manually run by aurelienpierre on master, January 1, 2024 15:23, 23m 36s
  • CI #631: manually run by aurelienpierre on master, January 1, 2024 02:09, 14m 10s
  • CI #630: commit 50f6da4 "Ashift: ensure new crop is computed after other params are changed", pushed by aurelienpierre to master, December 31, 2023 17:39, 25m 51s
  • CI #629: commit a2e22cc "Perspective correction : massive GUI overhaul", pushed by aurelienpierre to master, December 31, 2023 00:06, 23m 20s
  • CI #628: commit cfe1bc9 "Bauhaus: increase line height", pushed by aurelienpierre to master, December 30, 2023 22:50, 23m 4s
  • CI #627: commit 01cbea2 "Highlights: allow noise level up to 1 in GUI", pushed by aurelienpierre to master, December 30, 2023 20:42, 24m 50s
  • CI #626: pull request #294 "Import Session : Removed issues for Windows and some other code adjustments", synchronized by Jiyone (Jiyone:win_import), December 30, 2023 12:13, 23m 26s
  • CI #622: pull request #294 (same), synchronized by Jiyone, December 29, 2023 11:20, 23m 17s
  • CI #620: pull request #299 "MacOS packaging based on homebrew", synchronized by lologor (lologor:12-2023), December 29, 2023 00:57, 24m 7s
  • CI #618: commit 807bd55 "Crop: add image_update_final_size in commit", pushed by aurelienpierre to master, December 28, 2023 22:41, 25m 42s
  • CI #617: pull request #294 (same), synchronized by Jiyone, December 28, 2023 22:24, 24m 7s
  • CI #616: commit 4b54a2c "import: Allow BMP files", pushed by aurelienpierre to master, December 28, 2023 16:18, 24m 15s
  • CI #615: pull request #300 "import: Allow BMP files", opened by AlynxZhou (AlynxZhou:add-bmp-import), December 28, 2023 15:26, 23m 48s


Merge pull request #653 from kmilos/hassy_cfv100c

Add Hasselblad CFV 100C aliases

Lens details: https://www.voigtlaender.de/lenses/vm/35-mm-12-0-ultron-aspherical-ii/?lang=en

Would love for this to be added to the lensfun database.

Thank you.

If you have the lens, take calibration-suitable pictures. Then you can either run the calibration yourself and send us the results, or send us the calibration images and we will try to calibrate the lens. Please see: https://lensfun.github.io/calibration/

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (4e9c971) 60.95% compared to head (92bfa1d) 60.95%.

@@ Coverage Diff @@
## develop #653 +/- ##
========================================
 Coverage 60.95% 60.95% 
========================================
 Files 268 268 
 Lines 15967 15967 
 Branches 2047 2047 
========================================
 Hits 9732 9732 
 Misses 6104 6104 
 Partials 131 131 
Flag Coverage Δ
benchmarks 10.67% <ø> (ø)
integration 46.14% <ø> (ø)
linux 57.37% <ø> (ø)
macOS 24.26% <ø> (ø)
rpu_u 46.14% <ø> (ø)
unittests 21.62% <ø> (ø)
windows ∅ <ø> (∅)


Wow, CI is so green.

@kmilos thank you!

Merge pull request #652 from LebedevRI/bitvacuumer-crtp

BitVacuumer CRTP / source tree reorg

Merge remote-tracking branch 'upstream/pr/651' into develop

  • upstream/pr/651: Pentax Q-S1 normalization

Merge remote-tracking branch 'upstream/pr/650' into develop

  • upstream/pr/650: Hasselblad CFV-50c alias cleanup

Pentax Q-S1 normalization

Also drop no-samples attribute from Q10

What is the current problem you are facing? I started moving from LrC to darktable and ran into some problems when trying to ask for help improving the software. After a few rounds of searching I found this project. I will list the issues here and try to summarise what I want to do:

  • darktable-org/darktable/issues/16252
  • darktable-org/darktable/issues/16253
  • darktable-org/darktable/issues/16255

Where in your workflow does your problem occur? I want to automate the ingestion process, as headless and parallel as possible. Turning as much as possible into CLI operations would let me build automations, ideally running some of them on another machine. That is why I opened the three issues on darktable: to automate you need a CLI, to use the CLI alongside the GUI you need parallel access, and for smooth, fast operation you need to divide and conquer.

Additional context: I don't want to use other tools because of the compatibility problems that can arise from how each one writes XMP. Please tell me if there is a better tool for this.

Let me know if I can help in some way; I'm also open to advice or to expanding the discussion.

What is the end goal of automation here ?

That's not possible as of now, because Ansel relies on a SQLite database that can't be accessed concurrently from different processes while preventing data corruption. Plus a good deal of presets and default settings is done in GUI code. CLI usage is only meant to batch-export pictures that have already been edited.
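The constraint described above can be illustrated with a minimal sketch using Python's standard sqlite3 module (the `images` schema here is made up for the example, not Ansel's actual library schema): SQLite permits only one writer at a time, so a second connection attempting to write while the lock is held is refused rather than risking corruption.

```python
import os
import sqlite3
import tempfile

# SQLite allows only one writer at a time. A second connection trying to
# write while another holds the write lock gets "database is locked"
# instead of risking corruption. Schema is invented for this sketch.
path = os.path.join(tempfile.mkdtemp(), "library.db")

a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, name TEXT)")

a.execute("BEGIN IMMEDIATE")  # connection A takes the write lock
a.execute("INSERT INTO images (name) VALUES ('a.raw')")

locked = False
try:
    b.execute("BEGIN IMMEDIATE")  # connection B cannot also write
except sqlite3.OperationalError:
    locked = True

a.execute("COMMIT")
print(locked)  # True: the second writer was refused
```

With `timeout=0` the second connection fails immediately instead of retrying, which makes the single-writer behaviour visible; concurrent access from multiple processes would have to serialize through this lock.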


Automating the ingestion process, mainly to build local copies and the thumbnail cache.

Building the thumbnail cache already has a CLI command for the whole DB and a GUI control in the global menu. As for local copies, I don't know. But for both, parallel processing is not needed.


thanks!

Merge pull request #2141 from payano/update_readme.md

Add MSYS2 to README.md

Merge pull request #2165 from kmilos/patch-2

Add Pentax Q-S1

We could drop "no-samples" on both this one and the Q10, as this is only about naming scheme for regular DNGs...

Alternatively, if we really want to solicit even more OOC DNG samples, we might want to add "no-samples" to the Leicas added recently?

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c2c0d9f) 60.95% compared to head (ca97ed5) 60.95%.

@@ Coverage Diff @@
## develop #651 +/- ##
========================================
 Coverage 60.95% 60.95% 
========================================
 Files 267 267 
 Lines 15967 15967 
 Branches 2047 2047 
========================================
 Hits 9732 9732 
 Misses 6104 6104 
 Partials 131 131 
Flag Coverage Δ
benchmarks 10.67% <ø> (ø)
integration 46.13% <ø> (ø)
linux 57.37% <ø> (ø)
macOS 24.29% <ø> (ø)
rpu_u 46.13% <ø> (ø)
unittests 21.62% <ø> (ø)
windows ∅ <ø> (∅)


We could drop "no-samples" on both this one and the Q10, as this is only about naming scheme for regular DNGs...

Yes, please. It was an error to put (unknown-)?no-samples on DNG-only cameras.

@kmilos thank you!

The id attribute is what gets shown in the dt UI (if present and different from the actual value); we only want one model for all body combos.

Hasselblad Phocus software also normalizes to just "Hasselblad CFV-50c" when converting any of these from 3FR to FFF.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c2c0d9f) 60.95% compared to head (f0bdbe7) 60.95%.

@@ Coverage Diff @@
## develop #650 +/- ##
========================================
 Coverage 60.95% 60.95% 
========================================
 Files 267 267 
 Lines 15967 15967 
 Branches 2047 2047 
========================================
 Hits 9732 9732 
 Misses 6104 6104 
 Partials 131 131 
Flag Coverage Δ
benchmarks 10.67% <ø> (ø)
integration 46.13% <ø> (ø)
linux 57.37% <ø> (ø)
macOS 24.29% <ø> (ø)
rpu_u 46.13% <ø> (ø)
unittests 21.62% <ø> (ø)
windows ∅ <ø> (∅)


I guess this makes sense. This is so horrible :)

@kmilos thank you!

Hasselblad CFV-50c alias cleanup

The id is what gets shown in dt UI, we only want one model for all body combos

Hasselblad Phocus software also normalizes to just "Hasselblad CFV-50c" when converting any of these from 3FR to FFF

Merge pull request #649 from LebedevRI/broken-3fr

Support .3FR from old Hasselblad cameras

Add more Hasselblad CFV-50c alias

See https://github.com/darktable-org/rawspeed/pull/622#issuecomment-1933040200

AbstractLJpegDecoder: Hasselblad implicit EOI after Scan erratum

Fixes https://github.com/darktable-org/rawspeed/pull/622#issuecomment-1933133306

Those cameras didn't produce a valid, standard-compliant LJpeg stream: there is no EOI marker at the end, so just stop after decoding the Scan.
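A minimal sketch of the check involved, assuming only that JPEG streams normally end with the two-byte EOI marker 0xFF 0xD9 (illustrative Python, not rawspeed's actual decoder code; `has_explicit_eoi` is invented for the example):

```python
# A well-formed (L)JPEG stream ends with an EOI marker, 0xFF 0xD9.
# Streams from the old Hasselblad cameras omit it, so a decoder has to
# treat end-of-scan as an implicit EOI instead of failing.
EOI = b"\xff\xd9"

def has_explicit_eoi(stream: bytes) -> bool:
    return stream.endswith(EOI)

compliant = b"\xff\xd8" + b"\x00" * 8 + EOI   # SOI ... payload ... EOI
hasselblad = b"\xff\xd8" + b"\x00" * 8        # SOI ... payload, no EOI

print(has_explicit_eoi(compliant))   # True
print(has_explicit_eoi(hasselblad))  # False -> stop after decoding the Scan
```

In a real decoder the erratum would kick in at this point: rather than rejecting the stream, it synthesizes the implicit EOI once the Scan's data is exhausted.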

I looked through all the lists I could find of supported cameras and did not see any Ubiquiti Unifi cameras supported. I'm not sure if that's just a lack of interest or if I'm misunderstanding how this project works. Perhaps I need to find what sensor the camera is using? Though not sure how to do that, googling just returns "5MP CMOS" for sensor information.

Any guidance or help would be great. These cameras are great, but they have fisheye lenses, and I'd love to be able to use ffmpeg to dewarp the image for use with HomeKit or other services via Scrypted.

Thanks again for your time and apologies if this is not the appropriate place for this.

I guess the project is mostly focused on photography cameras (SLR, mirrorless, phones, etc.), but the maintainers know better. Calibration itself is not a super simple procedure, especially if you want a reasonably precise calibration, and it's hard to cover hundreds or thousands of security cameras for little benefit, because most of the time nobody cares about distortion in security cameras.

As for Ubiquiti cameras: they have pretty good specifications listing all the parameters, such as sensor size, field of view, and basic lens parameters. For example, https://techspecs.ui.com/unifi/cameras-nvrs/uvc-g5-pro has a 1/2" sensor and thus a crop factor of 5.3, and using this crop factor and the focal length (4.1 mm) you can do a calibration. Here is an example of how to do it: https://pixls.us/articles/create-lens-calibration-data-for-lensfun/ (taken from this discussion: https://github.com/lensfun/lensfun/discussions/1781)

Update: another link with calibration tutorials: https://lensfun.github.io/calibration/ Update 2: as far as I know, lensfun does not perform reprojection, which is required to make a fisheye projection rectilinear. It only corrects barrel distortion, making lines straight in the current projection (which is not necessarily straight for fisheye). But I'm not 100% sure how lensfun handles different kinds of projection; it probably treats everything as rectilinear.
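The crop-factor arithmetic mentioned above can be sketched as follows (Python; the 8.0 mm diagonal for a nominal 1/2" sensor is an approximation, which is why this yields roughly 5.4 where the spec sheet rounds to 5.3):

```python
import math

# Crop factor = full-frame diagonal / sensor diagonal. A 35mm full frame
# is 36 x 24 mm, so its diagonal is ~43.27 mm. The 35mm-equivalent focal
# length is then crop factor times the real focal length.
FULL_FRAME_DIAG_MM = math.hypot(36.0, 24.0)  # ~43.27 mm

def crop_factor(sensor_diag_mm: float) -> float:
    return FULL_FRAME_DIAG_MM / sensor_diag_mm

cf = crop_factor(8.0)                 # nominal 1/2" sensor diagonal, approx.
equiv = cf * 4.1                      # 4.1 mm real focal length
print(round(cf, 1), round(equiv, 1))  # -> 5.4 22.2
```

These two numbers (crop factor and real focal length) are exactly what the lensfun calibration workflow linked above asks for.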

Also prettify all Q series model names.

Just a note of support for the request to add Pentax Q-S1.

Merge remote-tracking branch 'upstream/pr/647' into develop

  • upstream/pr/647: Leica V-Lux 5 support

Merge remote-tracking branch 'upstream/pr/648' into develop

  • upstream/pr/648: Leica V-Lux 4 support

Merge remote-tracking branch 'upstream/pr/645' into develop

  • upstream/pr/645: Add some more unknown cameras

Merge remote-tracking branch 'upstream/pr/646' into develop

  • upstream/pr/646:
    Leica S (Typ 007), S2, and S3 normalization
    Leica X2: remove superfluous color matrix
    Leica T (Typ 701), TL, and TL2 normalization
    Leica X and X-U (Typ 113) normalization
    Leica X Vario (Typ 107) normalization
    Leica X1 normalization
    Leica M and M-D (Typ 262) normalization

Enable code coverage and write tests covering all the code in the code base.

Here is a Codecov example (I hope it is publicly visible): https://github.com/CAHEK7/lensfun/tree/codecov https://app.codecov.io/gh/CAHEK7/lensfun

The coverage build itself is already there; only a GitHub Action is required, plus repository admin access to properly set up the Codecov token and app.

Using RPU sample.

Said to be equivalent to DMC-FZ200.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (bec30b6) 60.92% compared to head (fa9e28e) 60.92%.

@@ Coverage Diff @@
## develop #648 +/- ##
========================================
 Coverage 60.92% 60.92% 
========================================
 Files 266 266 
 Lines 15960 15960 
 Branches 2046 2046 
========================================
 Hits 9723 9723 
 Misses 6105 6105 
 Partials 132 132 
Flag Coverage Δ
benchmarks 10.66% <ø> (ø)
integration 46.11% <ø> (ø)
linux 57.33% <ø> (ø)
macOS 24.28% <ø> (ø)
rpu_u 46.11% <ø> (ø)
unittests 21.63% <ø> (ø)
windows ∅ <ø> (∅)


@kmilos thank you!

Confirmed equivalent to DMC-FZ1000M2

The RPU sample sucks; I can not verify the crop at all.

@kmilos thank you.

Leica X2: remove superfluous color matrix

It is read from the DNG directly, and verified to be the same.

AFAICT, these cameras output only DNG, so this is just to normalize the make and model strings.

Leica M (Typ 262)
Leica M-D (Typ 262)
Leica S (Typ 007)
Leica S2
Leica S3
Leica T (Typ 701)
Leica TL
Leica TL2
Leica X1
Leica X Vario (Typ 107)
Leica X (Typ 113)
Leica X-U (Typ 113)

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (bec30b6) 60.92% compared to head (3be16ff) 60.92%.

@@ Coverage Diff @@
## develop #646 +/- ##
========================================
 Coverage 60.92% 60.92% 
========================================
 Files 266 266 
 Lines 15960 15960 
 Branches 2046 2046 
========================================
 Hits 9723 9723 
 Misses 6105 6105 
 Partials 132 132 
Flag Coverage Δ
benchmarks 10.66% <ø> (ø)
integration 46.11% <ø> (ø)
linux 57.33% <ø> (ø)
macOS 24.28% <ø> (ø)
rpu_u 46.11% <ø> (ø)
unittests 21.63% <ø> (ø)
windows ∅ <ø> (∅)


@kmilos thank you! I have not verified that these changes are correct, only did a visual inspection of the diff.

I managed to find quite a few of these DNG samples on either DPReview or PhotographyBlog to test, but not all...

Add some more unknown cameras

Canon EOS 60Da
Canon EOS-1D C
Canon PowerShot Pro90 IS
Fujifilm FinePix F710
Fujifilm FinePix F800EXR
Fujifilm FinePix F810
Fujifilm FinePix S20Pro
Fujifilm FinePix S5100
Fujifilm FinePix S9100
Leica D-Lux 2
Leica V-LUX 2
Leica V-LUX 3
Nikon D810A

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (bec30b6) 60.92% compared to head (cabecdb) 60.92%.

@@ Coverage Diff @@
## develop #645 +/- ##
========================================
 Coverage 60.92% 60.92% 
========================================
 Files 266 266 
 Lines 15960 15960 
 Branches 2046 2046 
========================================
 Hits 9723 9723 
 Misses 6105 6105 
 Partials 132 132 
Flag Coverage Δ
benchmarks 10.66% <ø> (ø)
integration 46.11% <ø> (ø)
linux 57.33% <ø> (ø)
macOS 24.28% <ø> (ø)
rpu_u 46.11% <ø> (ø)
unittests 21.63% <ø> (ø)
windows ∅ <ø> (∅)


@kmilos thank you! Searched each name in RPU backlog, no matches.

meson CI: remove clang32

Upstream has deprecated it.

Signed-off-by: Rosen Penev



Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c8d1192) 63.93% compared to head (fa18c06) 63.93%.

@@ Coverage Diff @@
## main #2912 +/- ##
=======================================
 Coverage 63.93% 63.93% 
=======================================
 Files 104 104 
 Lines 22400 22400 
 Branches 10877 10877 
=======================================
 Hits 14322 14322 
 Misses 5854 5854 
 Partials 2224 2224 


Add Nikkor AF-S 35mm f/1.4G

From upload 5d73bf

Merge pull request #644 from LebedevRI/bitstreamerjpeg-track-marker-pos

Make BitStreamerJPEG Great Again!

Revert "LJpegDecompressor::decodeN(): skip padding before restart interval marker"

Unless there is a case where it matters for correctness, let's not do that, as it's slower:

Comparing /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench-old to /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench
Benchmark Time CPU Time Old Time New CPU Old CPU New
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_mean -0.0120 -0.0120 150 148 150 148
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_median -0.0121 -0.0120 150 148 150 148
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_stddev -0.2048 -0.2032 0 0 0 0
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_cv -0.1952 -0.1936 0 0 0 0
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_mean -0.0248 -0.0248 829 808 829 808
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_median -0.0242 -0.0242 828 808 828 808
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_stddev -0.7370 -0.7363 2 0 2 0
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_cv -0.7303 -0.7296 0 0 0 0
OVERALL_GEOMEAN -0.0184 -0.0184 0 0 0 0

This reverts commit cb3a864da3657e4b9f88db23d08835cdf23b6038.

LJpegDecompressor: getStreamPosition() is usable, -12% runtime!

Comparing /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench-old to /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench
Benchmark Time CPU Time Old Time New CPU Old CPU New
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:1/process_time/real_time_mean -0.1499 -0.1499 105 89 105 89
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:1/process_time/real_time_median -0.1559 -0.1559 105 89 105 88
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:1/process_time/real_time_stddev +11.2087 +11.1990 0 1 0 1
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:1/process_time/real_time_cv +13.3613 +13.3500 0 0 0 0
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:1/process_time/real_time_mean -0.1219 -0.1219 138 121 138 121
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:1/process_time/real_time_median -0.1218 -0.1218 138 121 138 121
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:1/process_time/real_time_stddev +0.2877 +0.2760 0 0 0 0
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:1/process_time/real_time_cv +0.4664 +0.4531 0 0 0 0
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:1/process_time/real_time_mean -0.0948 -0.0948 298 270 298 270
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:1/process_time/real_time_median -0.0940 -0.0940 298 270 298 270
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:1/process_time/real_time_stddev -0.5894 -0.5890 1 0 1 0
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:1/process_time/real_time_cv -0.5463 -0.5460 0 0 0 0
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:1/process_time/real_time_mean -0.2091 -0.2091 79 62 79 62
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:1/process_time/real_time_median -0.2093 -0.2093 79 62 79 62
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:1/process_time/real_time_stddev +0.5261 +0.5346 0 0 0 0
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:1/process_time/real_time_cv +0.9297 +0.9404 0 0 0 0
./Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:1/process_time/real_time_mean -0.1273 -0.1273 171 150 171 150
./Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:1/process_time/real_time_median -0.1273 -0.1273 171 150 171 150
./Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:1/process_time/real_time_stddev +0.4437 +0.4470 0 0 0 0
./Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:1/process_time/real_time_cv +0.6543 +0.6581 0 0 0 0
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_mean -0.1328 -0.1328 173 150 173 150
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_median -0.1345 -0.1345 173 150 173 150
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_stddev +7.6826 +7.8204 0 1 0 1
Samsung/Galaxy S21 Ultra/20230712_115041.dng/threads:1/process_time/real_time_cv +9.0117 +9.1706 0 0 0 0
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_mean -0.0463 -0.0463 869 829 869 829
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_median -0.0463 -0.0463 869 829 869 829
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_stddev +0.4310 +0.3955 0 0 0 0
Samsung/Galaxy S23 Ultra/20231214_130645.dng/threads:1/process_time/real_time_cv +0.5005 +0.4632 0 0 0 0
OVERALL_GEOMEAN -0.1272 -0.1272 0 0 0 0

Cr2Decompressor: getStreamPosition() is usable, -13% runtime!

Comparing /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench-old to /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench
Benchmark Time CPU Time Old Time New CPU Old CPU New
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_mean -0.1058 -0.1057 27 24 27 24
EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_median -0.1054 -0.1053 27 24 27 24
EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_stddev +1.3100 +1.3476 0 0 0 0
EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_cv +1.5832 +1.6252 0 0 0 0
EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_mean -0.1433 -0.1426 107 91 3396 2912
EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_median -0.1448 -0.1431 107 91 3395 2909
EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_stddev +0.1930 -0.0059 0 0 12 12
EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_cv +0.3925 +0.1595 0 0 0 0
EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_mean -0.0961 -0.0961 56 51 56 51
EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_median -0.0960 -0.0960 56 51 56 51
EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_stddev -0.5090 -0.5048 0 0 0 0
EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_cv -0.4568 -0.4522 0 0 0 0
EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_mean -0.1958 -0.1958 235 189 235 189
EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_median -0.1960 -0.1959 235 189 235 189
EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_stddev -0.5610 -0.5659 0 0 0 0
EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_cv -0.4541 -0.4602 0 0 0 0
EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0081 U Test, Repetitions: 9 vs 9
EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_mean -0.1165 -0.0062 278 245 7018 6975
EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_median -0.1158 -0.0040 278 245 7011 6983
EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_stddev -0.1364 -0.0213 1 1 31 30
EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_cv -0.0226 -0.0152 0 0 0 0
EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_mean -0.0958 -0.0958 158 143 158 143
EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_median -0.0954 -0.0955 158 143 158 143
EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_stddev -0.3255 -0.3224 0 0 0 0
EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_cv -0.2540 -0.2506 0 0 0 0
EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_mean -0.1523 -0.1523 156 132 156 132
EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_median -0.1517 -0.1517 155 132 155 132
EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_stddev -0.5352 -0.5282 0 0 0 0
EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_cv -0.4517 -0.4434 0 0 0 0
OVERALL_GEOMEAN -0.1300 -0.1152 0 0 0 0

BitStreamerJPEG: track position of the 'end-of-stream' marker

This time the input position remains as-is, so the termination guarantee remains unaffected.

Merge remote-tracking branch 'upstream/pr/636' into develop

By Miloš Komarčević

Via Miloš Komarčević

  • upstream/pr/636: DngDecoder: default white level for floats is simply 1

Merge pull request #643 from LebedevRI/oss-fuzz-msan

oss-fuzz: re-enable openmp for msan build

oss-fuzz: just don't delete build directory afterwards

The coverage build is failing because the libomp sources are in the build directory, which we delete. While we could change the location of the sources, perhaps just keeping the whole build dir is fine too?

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (1487e4d) 60.76% compared to head (019911b) 60.76%.

@@ Coverage Diff @@
## develop #643 +/- ##
========================================
 Coverage 60.76% 60.76% 
========================================
 Files 266 266 
 Lines 15946 15946 
 Branches 2047 2047 
========================================
 Hits 9689 9689 
 Misses 6128 6128 
 Partials 129 129 
Flag Coverage Δ
benchmarks 10.53% <ø> (ø)
integration 45.99% <ø> (ø)
linux 57.24% <ø> (ø)
macOS 24.29% <ø> (ø)
rpu_u 45.99% <ø> (ø)
unittests 21.44% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Description of the bug

The visual mapping button, mentioned in the documentation, does not seem to be present, and I cannot see a way to edit/create keyboard shortcuts? (Short of manually editing the config files.)

To Reproduce

  1. Open Ansel
  2. Try to find the visual mapping button

Expected behavior

Visual mapping button is present, as described in the documentation.

Context

Screenshots image

Which commit introduced the error

Unknown.

System

  • ansel version : 0.0.0+729~ge2c4a0a60
  • OS : Arch Linux (kernel 6.7.3-zen1-1-zen)
  • Linux - Distro : See above
  • Memory : 64 GB
  • Graphics card : AMD 6900 XT
  • Graphics driver : Mesa
  • OpenCL installed : Yes
  • OpenCL activated : Yes
  • Xorg : N/A
  • Desktop : KDE Plasma (Wayland)
  • GTK+ : unknown
  • gcc : 13.2.1 20230801
  • cflags : see build command below
  • CMAKE_BUILD_TYPE : see build command below

Ansel compile/build commands (from the AUR package):

 cmake -B build \
 -DCMAKE_INSTALL_PREFIX=/usr \
 -DCMAKE_INSTALL_LIBDIR=lib \
 -DCMAKE_INSTALL_LIBEXECDIR=lib \
 -DCMAKE_BUILD_TYPE=Release \
 -DCMAKE_SKIP_RPATH=ON \
 -DBINARY_PACKAGE_BUILD=ON \
 -DUSE_LIBSECRET=ON \
 -DUSE_LUA=ON \
 -DUSE_BUNDLED_LUA=OFF \
 -DUSE_LIBRAW=ON \
 -DUSE_BUNDLED_LIBRAW=OFF \
 -DUSE_COLORD=ON \
 -DBUILD_CURVE_TOOLS=ON \
 -DBUILD_NOISE_TOOLS=ON \
 -DRAWSPEED_ENABLE_LTO=ON

cmake --build build

Ansel compile options:

compile options:
 bit depth is 64 bit
 normal build
 SSE2 optimized codepath enabled
 OpenMP support enabled
 OpenCL support enabled
 Lua support enabled, API version 8.0.0
 Colord support enabled
 GraphicsMagick support enabled
 ImageMagick support disabled
 OpenEXR support enabled

Additional context

  • Can you reproduce with another darktable version(s)? unknown
  • Can you reproduce with a RAW or Jpeg or both? N/A
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
  • If the issue is with the output image, attach an XMP file (you'll have to change the extension to .txt)
  • Is the issue still present using an empty/new config-dir (e.g. start darktable¹ with --configdir "/tmp")? yes
  • Do you use lua scripts? no
  • What lua scripts start automatically? N/A
  • What lua scripts were running when the bug occurred? N/A

¹ I'm assuming darktable here is supposed to be ansel.

Keyboard shortcuts are temporarily removed.

This is an example of the modifications related to #2161. It is a partial fix for #2162, an example of how #2142 can be fixed, and probably a basis for updating #2138.

Deprecated lf_mount_new has been removed from the cpp file (the header part was done in https://github.com/lensfun/lensfun/pull/2134). I think the other C-API lf_*_new functions should be removed from the cpp files too, but in separate pull requests.

If that example looks good, I can proceed with the other classes step by step.


Pull requests: lensfun/lensfun

15 Open · 466 Closed

Pull requests list

  • Modernize lfMount — #2163, opened Feb 3, 2024 by CAHEK7 · 16
  • Add C interface for EnableDistortionCorrection — #2160, opened Feb 2, 2024 by CAHEK7
  • CI: add macOS job — #2159, opened Feb 2, 2024 by kmilos · 3
  • Update some Sigma and Tamron strings — #2147, opened Jan 12, 2024 by kmilos · 1
  • Use smart pointers for lfLens Calibrations — #2138, opened Jan 3, 2024 by payano · 12
  • replace "sourceforge.net" with "lensfun.github.io" in lensfun-update-data — #2133, opened Dec 13, 2023 by jonas-ott · 1
  • Add tool for generating database updates — #2132, opened Dec 13, 2023 by jonas-ott
  • fix year 2038 problem — #2130, opened Dec 5, 2023 by jonas-ott · 8
  • Add vignetting for Samyang 12mm AF — #2004, opened Mar 10, 2023 by interru · 3
  • Added model code SAL18135 to name of Sony DT 18-135mm f/3.5-5.6 SAM — #1992, opened Feb 24, 2023 by tomalakgeretkal (Draft)
  • corrected name for the sigma 150-600mm contemporary — #1588, opened Jan 18, 2022 by benklop · 1 · 12
  • Match library file name with soname — #1351, opened Jan 26, 2021 by mikhailnov (Stable release) · 3
  • Make plots more readable — #1166, opened Apr 27, 2020 by uchrisu · 2
  • Give User-Profiles always priority — #1151, opened Apr 24, 2020 by uchrisu (Stable release) · 1 · 1
  • Another name for the Sigma 18-300mm — #1039, opened Nov 18, 2019 by goddisignz · 3


Calling vector.clear() does not free the pointers held by the vector, and since this is an assignment operator, the vector may already contain allocated pointers. For example lfMount: https://github.com/lensfun/lensfun/blob/20cbe397dc8e11eea8ea0a7ac4ca42b2b2413f34/libs/lensfun/mount.cpp#L43

The same problem exists for the lfLens assignment operator as well: https://github.com/lensfun/lensfun/blob/20cbe397dc8e11eea8ea0a7ac4ca42b2b2413f34/libs/lensfun/lens.cpp#L101

While here the pointers are handled properly: https://github.com/lensfun/lensfun/blob/20cbe397dc8e11eea8ea0a7ac4ca42b2b2413f34/libs/lensfun/lens.cpp#L111

Also, that vector.clear() statement is not required in the copy constructors, since there all the vectors are empty.

Related to #2161 if smart pointers are adopted.

In https://github.com/lensfun/lensfun/issues/2142 I mentioned that the current codebase, while being compiled as C++14, contains lots of pure C leftovers. One example was uncovered in the comment https://github.com/lensfun/lensfun/pull/2138#discussion_r1477043594, due to an unnecessary C-style reinterpret cast at https://github.com/lensfun/lensfun/blob/20cbe397dc8e11eea8ea0a7ac4ca42b2b2413f34/libs/lensfun/lens.cpp#L1317

  • [ ] get rid of C-style casts; replace them with appropriate C++ casts or remove them completely
  • [ ] get rid of pointer-based operations where C++ containers are available, for example https://github.com/lensfun/lensfun/blob/20cbe397dc8e11eea8ea0a7ac4ca42b2b2413f34/libs/lensfun/mount.cpp#L30
  • [ ] consider using smart pointers wherever possible, taking care of backward compatibility
  • [ ] implement proper move constructors and move assignment operators
  • [ ] (questionable) use the standard library instead of glib or our own implementations
  • [ ] (questionable) use std::string instead of raw char * where possible, taking care of backward compatibility

Merge pull request #640 from LebedevRI/oss-fuzz-bundled-omp

oss-fuzz: OpenMP-aware fuzzing

DngDecoder: actually correctly handle defaulted white point

Without white point being optional this would've never been caught.

VC5Decompressor::createWaveletBandDecodingTasks(): iterate over reverse bands

LLVMOpenMP.cmake: disable all sanitizers

ASan, UBSan and MSan are all complaining.

oss-fuzz: do build with OpenMP

OpenMP structured blocks can't throw. That invariant is generally not reflected in the code (unless enforced manually) when compiled without OpenMP, and that hides bugs (e.g. 787695f19bba53c5186cd89ee383884e69d6a285).

rawspeed_get_number_of_processor_cores() returns 1 for fuzzers, so there should not be any actual multi-threading done.

Actually allow using bundled libomp when no system libomp exists

find_package(OpenMP) doesn't work then.

VC5Decompressor: avoid OpenMP-specific heap-use-after-free

This is rather convoluted.

=================================================================
==360017==ERROR: AddressSanitizer: heap-use-after-free on address 0x5110000020c8 at pc 0x556f25741bed bp 0x7feae20da010 sp 0x7feae20da008
READ of size 8 at 0x5110000020c8 thread T27
 #0 0x556f25741bec in .omp_outlined..49 /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:349:7
 #1 0x556f25741bec in .omp_task_entry..50 /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:344:1
 #2 0x7feb04f37452 (/lib/x86_64-linux-gnu/libomp.so.5+0x68452) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #3 0x7feb04f3b704 (/lib/x86_64-linux-gnu/libomp.so.5+0x6c704) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #4 0x7feb04f4d174 (/lib/x86_64-linux-gnu/libomp.so.5+0x7e174) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #5 0x7feb04f4775e (/lib/x86_64-linux-gnu/libomp.so.5+0x7875e) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #6 0x7feb04f45adf (/lib/x86_64-linux-gnu/libomp.so.5+0x76adf) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #7 0x7feb04efcd3f in __kmpc_barrier (/lib/x86_64-linux-gnu/libomp.so.5+0x2dd3f) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #8 0x556f25754f5b in rawspeed::VC5Decompressor::decodeThread(bool&) const /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:838:19
 #9 0x7feb04faf002 in __kmp_invoke_microtask (/lib/x86_64-linux-gnu/libomp.so.5+0xe0002) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #10 0x7feb04f183c8 (/lib/x86_64-linux-gnu/libomp.so.5+0x493c8) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #11 0x7feb04f16955 (/lib/x86_64-linux-gnu/libomp.so.5+0x47955) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #12 0x7feb04f88137 (/lib/x86_64-linux-gnu/libomp.so.5+0xb9137) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #13 0x556f256e281e in asan_thread_start(void*) crtfastmath.c
 #14 0x7feb04c6945b in start_thread nptl/pthread_create.c:444:8
 #15 0x7feb04ce9bbb in clone3 misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

0x5110000020c8 is located 8 bytes inside of 240-byte region [0x5110000020c0,0x5110000021b0)
freed by thread T16 here:
 #0 0x556f25718a61 in operator delete(void*) (/home/lebedevri/rawspeed/build-Clang17-FUZZ/fuzz/librawspeed/decompressors/VC5DecompressorFuzzer+0x1b8a61) (BuildId: dd91d53719a432f2)
 #1 0x556f25741b3e in std::default_delete::operator()(rawspeed::VC5Decompressor::Wavelet::AbstractBand*) const /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/unique_ptr.h:93:2
 #2 0x556f25741b3e in std::unique_ptr>::~unique_ptr() /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/unique_ptr.h:398:4
 #3 0x556f25741b3e in void std::destroy_at>>(std::unique_ptr>*) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/stl_construct.h:88:15
 #4 0x556f25741b3e in void std::_Destroy>>(std::unique_ptr>*) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/stl_construct.h:149:7
 #5 0x556f25741b3e in void std::_Destroy_aux::__destroy>*>(std::unique_ptr>*, std::unique_ptr>*) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/stl_construct.h:163:6
 #6 0x556f25741b3e in void std::_Destroy>*>(std::unique_ptr>*, std::unique_ptr>*) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/stl_construct.h:195:7
 #7 0x556f25741b3e in void std::_Destroy>*, std::unique_ptr>>(std::unique_ptr>*, std::unique_ptr>*, std::allocator>>&) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/alloc_traits.h:941:7
 #8 0x556f25741b3e in std::vector>, std::allocator>>>::_M_erase_at_end(std::unique_ptr>*) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/stl_vector.h:1944:6
 #9 0x556f25741b3e in std::vector>, std::allocator>>>::clear() /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/stl_vector.h:1605:9
 #10 0x556f25741b3e in .omp_outlined..49 /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:349:21
 #11 0x556f25741b3e in .omp_task_entry..50 /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:344:1
 #12 0x7feb04f37452 (/lib/x86_64-linux-gnu/libomp.so.5+0x68452) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)
 #13 0x7feb04f3b704 (/lib/x86_64-linux-gnu/libomp.so.5+0x6c704) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)

previously allocated by thread T0 here:
 #0 0x556f257181e1 in operator new(unsigned long) (/home/lebedevri/rawspeed/build-Clang17-FUZZ/fuzz/librawspeed/decompressors/VC5DecompressorFuzzer+0x1b81e1) (BuildId: dd91d53719a432f2)
 #1 0x556f25750462 in std::__detail::_MakeUniq::__single_object std::make_unique(rawspeed::VC5Decompressor::Wavelet&, bool&, bool&) /usr/bin/../lib/gcc/x86_64-linux-gnu/14/../../../../include/c++/14/bits/unique_ptr.h:1076:30
 #2 0x556f25750462 in rawspeed::VC5Decompressor::parseLargeCodeblock(rawspeed::ByteStream) /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:810:28
 #3 0x556f25745535 in rawspeed::VC5Decompressor::parseVC5() /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:588:9
 #4 0x556f2574348c in rawspeed::VC5Decompressor::VC5Decompressor(rawspeed::ByteStream, rawspeed::RawImage const&) /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:431:3
 #5 0x556f2571af96 in LLVMFuzzerTestOneInput /home/lebedevri/rawspeed/fuzz/librawspeed/decompressors/VC5Decompressor.cpp:55:31
 #6 0x556f2562fe84 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) crtfastmath.c
 #7 0x556f25630849 in fuzzer::Fuzzer::TryDetectingAMemoryLeak(unsigned char const*, unsigned long, bool) crtfastmath.c
 #8 0x556f2561927e in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) crtfastmath.c
 #9 0x556f2561edd6 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) crtfastmath.c
 #10 0x556f25648596 in main (/home/lebedevri/rawspeed/build-Clang17-FUZZ/fuzz/librawspeed/decompressors/VC5DecompressorFuzzer+0xe8596) (BuildId: dd91d53719a432f2)
 #11 0x7feb04c086c9 in __libc_start_call_main csu/../sysdeps/nptl/libc_start_call_main.h:58:16

Thread T27 created by T0 here:
 #0 0x556f256ca4c1 in pthread_create (/home/lebedevri/rawspeed/build-Clang17-FUZZ/fuzz/librawspeed/decompressors/VC5DecompressorFuzzer+0x16a4c1) (BuildId: dd91d53719a432f2)
 #1 0x7feb04f87737 (/lib/x86_64-linux-gnu/libomp.so.5+0xb8737) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)

Thread T16 created by T0 here:
 #0 0x556f256ca4c1 in pthread_create (/home/lebedevri/rawspeed/build-Clang17-FUZZ/fuzz/librawspeed/decompressors/VC5DecompressorFuzzer+0x16a4c1) (BuildId: dd91d53719a432f2)
 #1 0x7feb04f87737 (/lib/x86_64-linux-gnu/libomp.so.5+0xb8737) (BuildId: 49f63176f1578200567d2439fd1ca076a4a2c2b3)

SUMMARY: AddressSanitizer: heap-use-after-free /home/lebedevri/rawspeed/src/librawspeed/decompressors/VC5Decompressor.cpp:349:7 in .omp_outlined..49
Shadow bytes around the buggy address:
 0x511000001e00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
 0x511000001e80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 0x511000001f00: 00 00 00 00 00 00 fa fa fa fa fa fa fa fa fa fa
 0x511000001f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 0x511000002000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fa fa
=>0x511000002080: fa fa fa fa fa fa fa fa fd[fd]fd fd fd fd fd fd
 0x511000002100: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
 0x511000002180: fd fd fd fd fd fd fa fa fa fa fa fa fa fa fa fa
 0x511000002200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 0x511000002280: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fa fa
 0x511000002300: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
 Addressable: 00
 Partially addressable: 01 02 03 04 05 06 07
 Heap left redzone: fa
 Freed heap region: fd
 Stack left redzone: f1
 Stack mid redzone: f2
 Stack right redzone: f3
 Stack after return: f5
 Stack use after scope: f8
 Global redzone: f9
 Global init order: f6
 Poisoned by user: f7
 Container overflow: fc
 Array cookie: ac
 Intra object redzone: bb
 ASan internal: fe
 Left alloca redzone: ca
 Right alloca redzone: cb
==360017==ABORTING

Add C interface for lfModifier::EnableDistortionCorrection (const lfLensCalibDistortion& lcd), addresses https://github.com/lensfun/lensfun/issues/502

I'm just a bit concerned about naming.

Cr2Decoder::decodeMetaDataInternal(): default white level to ljpeg sample precision

Revert "Merge pull request #631 from LebedevRI/fuzz-with-omp"

To make CI green. It is not yet obvious how this will be re-approached.

This reverts commit 3b5b63f1e52682822a825c51557b641642da7ad4, reversing changes made to 40f531f503e9e8988f5afe41f65723938d78b7d6.

Merge pull request #2158 from kmilos/patch-2

Add OM System OM-1 II

Codecov Report

Attention: 5 lines in your changes are missing coverage. Please review.

Comparison is base (dd50b78) 60.78% compared to head (f852acd) 60.77%.

Files Patch % Lines
src/librawspeed/decompressors/VC5Decompressor.cpp 33.33% 3 Missing and 1 partial :warning:
src/librawspeed/common/RawImage.cpp 50.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #640 +/- ##
===========================================
- Coverage 60.78% 60.77% -0.02% 
===========================================
 Files 266 266 
 Lines 15949 15952 +3 
 Branches 2047 2051 +4 
===========================================
 Hits 9695 9695 
- Misses 6125 6127 +2 
- Partials 129 130 +1 
Flag Coverage Δ
benchmarks 10.53% <0.00%> (-0.01%) :arrow_down:
integration 46.01% <50.00%> (-0.01%) :arrow_down:
linux 57.25% <50.00%> (-0.01%) :arrow_down:
macOS 24.28% <0.00%> (-0.02%) :arrow_down:
rpu_u 46.01% <50.00%> (-0.01%) :arrow_down:
unittests 21.43% <0.00%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)


Alright, that is good enough for now.

DngDecoder: default white level for floats is simply 1

  1. We divide by white level to normalize the image s.t. the 1.0 becomes the white level.
  2. In DNG spec, white level is always an integer, there can not be a non-integral white level.
  3. Additionally, no white level for FP DNG's means that it is pre-normalized (default is 1.0).

Therefore, we can surmise that a) FP DNG's never have a white level of <1.0, b) FP DNG's always have an integral white level, and c) if an FP DNG has a white level of w, it is always correct to divide by float(w) to normalize the image. Therefore, we can, as an optimization, store the FP DNG white level as an integer.

This matches DNG SDK behavior.

Runs on 'macos-14' explicitly to test arm64 architecture ('macos-latest' is still 'macos-12', which is on x86_64).

While you are at it, could you also upgrade actions/setup-python@v4 to v5 in .github/workflows/cmake-windows.yml to fix the Node.js 16 deprecation warning?

Done.

You could also consider enabling automatic action updates: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuring-dependabot-version-updates
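For reference, a minimal sketch of such a config (a hedged example of the documented dependabot.yml format; the weekly interval is an arbitrary choice):

```yaml
# .github/dependabot.yml -- keep GitHub Actions versions up to date
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```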

Review these changes using an interactive CodeSee Map

Legend

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c8d1192) 63.93% compared to head (9466b4c) 63.93%.

@@ Coverage Diff @@
## main #2910 +/- ##
=======================================
 Coverage 63.93% 63.93% 
=======================================
 Files 104 104 
 Lines 22400 22400 
 Branches 10877 10877 
=======================================
 Hits 14322 14322 
 Misses 5854 5854 
 Partials 2224 2224 


Hm, Conan is not picking up system brotli on Ubuntu for some reason (the version is 1.0.9), but I guess it's not a big deal...

Merge pull request #639 from kmilos/kmilos/fuji_xa20_u

Add Fujifilm X-A20 placeholder

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (65c98e1) 60.78% compared to head (4c815fe) 60.78%.

@@ Coverage Diff @@
## develop #639 +/- ##
========================================
 Coverage 60.78% 60.78% 
========================================
 Files 266 266 
 Lines 15947 15947 
 Branches 2047 2047 
========================================
 Hits 9694 9694 
 Misses 6125 6125 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.54% <ø> (ø)
integration 46.01% <ø> (ø)
linux 57.26% <ø> (ø)
macOS 24.30% <ø> (ø)
rpu_u 46.01% <ø> (ø)
unittests 21.44% <ø> (ø)
windows ∅ <ø> (∅)


@kmilos thank you!

Merge pull request #638 from LebedevRI/unknown-white-level

Unknown white level

Optional: add value_or() helper

Signed-off-by: Roman Lebedev

DngDecoder/RawImageDataFloat::scaleBlackWhite(): no white level means 1.0F

RawImageDataFloat::scaleBlackWhite() is dead code, i bet nobody uses it.

Convert whitePoint to Optional

For now, just assume it is always set.

Merge pull request #637 from kmilos/patch-2

Add Sony ZV-1M2 placeholder

Duh, already had https://github.com/darktable-org/rawspeed/pull/488

As i've explained before, i'd rather merge this kind of thing than have https://github.com/darktable-org/rawspeed/pull/488 stay open forever.

Go for it.

PR's can't be reopened once the branch has been deleted.

Sorry 😔

@kmilos thank you!

  1. We divide by white level to normalize the image s.t. the 1.0 becomes the white level.
  2. In DNG spec, white level is always an integer, there can not be a non-integral white level.
  3. Additionally, no white level for FP DNG's means that it is pre-normalized (default is 1.0).

Therefore, we can surmise that a) FP DNG's never have white level of <1.0 b) FP DNG's always have integral white level c) if FP DNG has a white level of w, it is always correct to divide by float(w) to normalize the image ... therefore, we can, as an optimization, store FP DNG white level as an integer.

This matches DNG SDK behavior.

I don't think this is better.

I don't think this is better.

Better or not is not relevant - it is what the SDK does.

it is what the SDK does.

in addition to the SDK source code, this is also written in the standard: https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf#page=29

I'm fully aware of that, and none of that is a relevant argument here, because it dictates how the value is stored in the DNG; it does not and cannot dictate how the value is later stored in implementations.

The spec is somewhat ambiguous: it says 1.0, but the type is SHORT/LONG.

The SDK removes the ambiguity for me.

Spec says default is 1.0 for float, it doesn’t say it’s the only possible value.

I don't think this is better.

Better or not is not relevant - it is what the SDK does.

Does SDK really store 1.0f as (int)1 (0x1 hex)?

Yep, see the link.

Aha. But as i have said, i don't see how that is relevant here. It is still a magic constant. Why would we better off using that one over the one we use?

It's the correct default value 3rd parties could assume and rely on, just like 2^bps-1.

Is this a practical or theoretical concern?

For example, https://github.com/darktable-org/darktable/pull/16206 relies on either the float DNG file itself carries WhiteLevel=1, or rawspeed feeds it a 1 if absent.

Once again, the fact that SDK returns (int)1 as default white level for FP DNG's is a hack / workaround for the lack of std::variant for when that function was written.

We are not required to be bug-for-bug compatible with some other library. Returning 1 there is just as much as a magic number as returning 65535 or 65536.

There is absolutely no reason whatsoever why that idiotic behavior must now be mirrored in/by dt's img->raw_white_point. (Just add a boolean img->raw_is_prenormalized?)

DNG spec does not say that for FP DNG's, white level of (int)1 means that it is normalized.

Likewise, there's an inversion of cause and effect there. The current code does not rely on (int)1 being returned in such situations. Changing the magic constant breaks all other users (i should not have merged #635). That patch will just have to rely on some other magic number.

Returning 1 there is just as much as a magic number as returning 65535 or 65536.

The DNG spec text says the default value for float files is 1.0, not 65535.0, nor 65536.0... I don't think there's anything arbitrary/magic here.

That's like saying one can return a random number in the integer case, while the spec says it's 2^bps-1?

Let's just remove both defaults then?

And yes, the previous PR was a bit hasty before the dust settled down, sorry about that.

Returning 1 there is just as much as a magic number as returning 65535 or 65536.

The DNG spec text says the default value for float files is 1.0, not 65535.0, nor 65536.0... I don't think there's anything arbitrary/magic here.

Once again, there is a confusion/conflation of the specification, which only dictates what some particular value of particular tag (or lack thereof) means, and how an implementation needs to further represent that knowledge.

I never argued that the default isn't 1.0, i only said that we can't represent float white level, and thus any representation of that fact as an integer results in a magic number that requires special treatment by consumers. Regardless of what that number is.

https://github.com/darktable-org/rawspeed/pull/638 either makes things slightly more sufferable, or worse.

638 either makes things slightly more sufferable, or worse.

I feel it's overcomplicating things...

After sleeping on this, the wording that should have been here, and would have helped, is:

1. we divide by white level to normalize the image s.t. the 1.0 becomes the white level
2. in dng, white level is always an integer, there can not be a non-integral white level.
3. additionally, no white level for FP DNG's means that it is pre-normalized.
therefore, we can surmise that
a) FP DNG's never have white level of <1.0
b) FP DNG's always have integral white level
c) if FP DNG has a white level of `w`, it is always correct to divide by float(w) to normalize the image
... therefore, we can, as an optimization, store FP DNG white level as an integer

One-line commit messages (and PR descriptions) suck. Please update the commit message with that wording.

But even with that, #638 is correct. Encoding the state "we don't actually know what the value is" via some magic numbers is just wrong.

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (db1b955) 60.78% compared to head (75ebaaa) 60.80%. Report is 17 commits behind head on develop.

Files Patch % Lines
src/librawspeed/decoders/DngDecoder.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #636 +/- ##
===========================================
+ Coverage 60.78% 60.80% +0.01% 
===========================================
 Files 266 266 
 Lines 15947 15947 
 Branches 2047 2047 
===========================================
+ Hits 9694 9696 +2 
+ Misses 6125 6123 -2 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.54% <0.00%> (ø)
integration 46.01% <0.00%> (ø)
linux 57.26% <0.00%> (ø)
macOS 24.30% <0.00%> (ø)
rpu_u 46.01% <0.00%> (ø)
unittests 21.44% <0.00%> (ø)
windows ∅ <ø> (∅)


But even with that, https://github.com/darktable-org/rawspeed/pull/638 is correct.

Formally, yes. But thanks for reconsidering, I feel it makes sense and is the pragmatic thing to do here.

I'll improve the commit message. Shall I put those comments in the code as well?

I'll improve the commit message. Shall I put those comments in the code as well?

Yes, I think something like this should be best:

// A white level of the image, if known.
// NOTE: it is always correct to divide the pixel by float(whiteLevel) to normalize the image.
// NOTE: for floating-point images, the white level is always integral, and thus >= 1.0f
Optional whiteLevel;
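For illustration, a minimal sketch of that idea in C++, assuming `std::optional` for the "if known" part; `ImageMeta` and `normalizePixel` are hypothetical names, not rawspeed's actual API:

```cpp
#include <cassert>
#include <optional>

// Hypothetical sketch: store the white level as an integer when known,
// and leave it unset instead of using a magic value when it is not.
struct ImageMeta {
  std::optional<int> whiteLevel; // absent == "we don't actually know"
};

// Per the invariants above, dividing the pixel by float(whiteLevel) is
// always a correct normalization; fall back to 1.0f when unknown.
inline float normalizePixel(float pixel, const ImageMeta& meta) {
  const float w = meta.whiteLevel ? static_cast<float>(*meta.whiteLevel) : 1.0f;
  return pixel / w;
}
```

The point of `std::optional` here is exactly the one made above: the "unknown" state is encoded in the type, not in a magic number.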

@kmilos Thank you!

Merge pull request #635 from kmilos/kmilos/dng_float_white

DngDecoder: use uninitialized white point value for float DNGs

E.g. HDRMerge will use 65535 for real 16-bit cameras; don't hijack it.

This is to go w/ https://github.com/darktable-org/darktable/pull/16206

~~TODO: WhiteLevel should probably become float to support RATIONAL type and default value 1.0 (same goes for BlackLevel).~~

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (0804b2b) 60.79% compared to head (f2ad598) 60.79%.

Files Patch % Lines
src/librawspeed/decoders/DngDecoder.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #635 +/- ##
========================================
 Coverage 60.79% 60.79% 
========================================
 Files 266 266 
 Lines 15941 15941 
 Branches 2044 2046 +2 
========================================
 Hits 9691 9691 
 Misses 6122 6122 
 Partials 128 128 
Flag Coverage Δ
benchmarks 10.55% <0.00%> (ø)
integration 46.01% <0.00%> (ø)
linux 57.28% <0.00%> (ø)
macOS 24.32% <0.00%> (ø)
rpu_u 46.01% <0.00%> (ø)
unittests 21.45% <0.00%> (ø)
windows ∅ <ø> (∅)


Yeah, I don't know, seems broken either way. Thank you.

Yeah, I don't know, seems broken either way.

Yep, "damned if you do, damned if you don't" sort of thing... ~~(until full support for rationals is implemented.)~~

Actually, after taking a look around the SDK, this should literally be "1"...

DngDecoder: use uninitialized white point value for float DNGs

E.g. HDRMerge will use 65535 for real 16-bit cameras; don't hijack it.

Merge pull request #634 from LebedevRI/jpeg-overflow-revert

Re-add BitStreamerJPEG forward progress guarantee

Cr2Decompressor::decompressN_X_Y(): insist that we've advanced 0 bytes

This reverts commit 861923566436a9ceaa896dacbfb1c5209b46ad81.

Revert "BitStreamerJPEG::fillCache(): don't mark JPEG Marker bytes (and further) as consumed"

This reverts commit f47140a7a94d756deb4de411ce8f2df9f8b5422a.

As expected, f47140a7a94d756deb4de411ce8f2df9f8b5422a broke nearly all fuzzers. Performance is nice, but not at that cost. For now, just go back to the known-correct implementation, without reverting JPEG restart interval handling.

Merge pull request #631 from LebedevRI/fuzz-with-omp

oss-fuzz: do build with OpenMP

Merge pull request #633 from LebedevRI/ci

CI: macos-14 / XCode 15.2

Merge pull request #632 from kmilos/patch-1

Add OM System OM-1 Mark II placeholder

A missing space between different languages found today.

Quality Gate passed

The SonarCloud Quality Gate passed, but some issues were introduced.

214 New issues
0 Security Hotspots
No data about Coverage
2.2% Duplication on New Code

See analysis details on SonarCloud

OpenMP structured blocks can't throw. That invariant is generally not reflected in the code (unless enforced manually) when compiled without OpenMP, which hides bugs (e.g. 787695f19bba53c5186cd89ee383884e69d6a285).
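The usual containment pattern looks roughly like this. This is an illustrative sketch, not rawspeed's actual code; `processAll` is a hypothetical function, and the code behaves the same with or without -fopenmp (the pragmas are simply ignored when OpenMP is disabled):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>

// An OpenMP structured block must not let an exception escape, so each
// iteration catches locally, and the first captured exception is rethrown
// only after the parallel region has ended.
int processAll(int n) {
  std::exception_ptr firstError = nullptr;
  int processed = 0;
#pragma omp parallel for reduction(+ : processed)
  for (int i = 0; i < n; ++i) {
    try {
      if (i == 3)
        throw std::runtime_error("bad slice"); // simulated per-item failure
      ++processed;
    } catch (...) {
#pragma omp critical
      if (!firstError)
        firstError = std::current_exception();
    }
  }
  if (firstError)
    std::rethrow_exception(firstError);
  return processed;
}
```

When built without OpenMP, nothing enforces the "no exceptions escape the block" invariant, which is how such bugs can go unnoticed until an OpenMP build hits them.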

rawspeed_get_number_of_processor_cores() returns 1 for fuzzers, so there should not be any actual multi-threading done.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c52d923) 60.86% compared to head (3e00628) 60.86%. Report is 4 commits behind head on develop.

@@ Coverage Diff @@
## develop #631 +/- ##
========================================
 Coverage 60.86% 60.86% 
========================================
 Files 266 266 
 Lines 15941 15941 
 Branches 2053 2053 
========================================
 Hits 9702 9702 
 Misses 6108 6108 
 Partials 131 131 
Flag Coverage Δ
benchmarks 10.64% <ø> (ø)
integration 46.07% <ø> (ø)
linux 57.31% <ø> (ø)
macOS 24.16% <ø> (ø)
rpu_u 46.07% <ø> (ø)
unittests 21.54% <ø> (ø)
windows ∅ <ø> (∅)


Merge pull request #630 from LebedevRI/next

Be more correct about exceptions and omp structured blocks

aurelienpierreeng/ansel (Public, forked from edgardoh/darktable; Fork 18, Star 563)

Actions: aurelienpierreeng/ansel, showing runs from all workflows (2,463 workflow runs)

• Nightly Win PKG #449: Scheduled on master, January 29, 2024 01:03 (18m 41s)
• Nightly Linux PKG #486: Scheduled on master, January 29, 2024 00:36 (14m 32s)
• Nightly Mac PKG #415: Scheduled on master, January 29, 2024 00:29 (1m 32s)
• Crash on import-Copy to disk, Matrix bot #616: Issue #320 edited by elidigui, January 28, 2024 10:26 (21s)
• Crash on import-Copy to disk, Matrix bot #615: Issue #320 opened by elidigui, January 28, 2024 08:55 (24s)
• Nightly Win PKG #448: Scheduled on master, January 28, 2024 01:07 (19m 12s)
• Nightly Linux PKG #485: Scheduled on master, January 28, 2024 00:39 (14m 0s)
• Nightly Mac PKG #414: Scheduled on master, January 28, 2024 00:32 (55s)
• Nightly Win PKG #447: Scheduled on master, January 27, 2024 01:01 (19m 37s)
• Nightly Linux PKG #484: Scheduled on master, January 27, 2024 00:36 (13m 46s)
• Nightly Mac PKG #413: Scheduled on master, January 27, 2024 00:29 (1m 6s)
• Nightly Win PKG #446: Scheduled on master, January 26, 2024 01:03 (19m 41s)
• Nightly Linux PKG #483: Scheduled on master, January 26, 2024 00:36 (14m 22s)
• Nightly Mac PKG #412: Scheduled on master, January 26, 2024 00:30 (1m 6s)
• Nightly Win PKG #445: Scheduled on master, January 25, 2024 01:08 (19m 13s)
• Nightly Linux PKG #482: Scheduled on master, January 25, 2024 00:39 (13m 34s)
• Nightly Mac PKG #411: Scheduled on master, January 25, 2024 00:32 (1m 23s)
• Nightly Win PKG #444: Scheduled on master, January 24, 2024 01:08 (19m 7s)
• Nightly Linux PKG #481: Scheduled on master, January 24, 2024 00:39 (14m 43s)
• Nightly Mac PKG #410: Scheduled on master, January 24, 2024 00:32 (58s)
• Nightly Win PKG #443: Scheduled on master, January 23, 2024 01:08 (19m 16s)
• Nightly Linux PKG #480: Scheduled on master, January 23, 2024 00:39 (14m 1s)
• Nightly Mac PKG #409: Scheduled on master, January 23, 2024 00:32 (1m 6s)
• Nightly Win PKG #442: Scheduled on master, January 22, 2024 01:10 (19m 14s)
• Nightly Linux PKG #479: Scheduled on master, January 22, 2024 00:41 (13m 5s)

SonyArw2Decompressor::decompressThread(): no other exception types are expected

PhaseOneDecompressor::decompressThread(): no other exception types are expected

FujiDecompressorImpl::decompressThread(): no other exception types are expected

AbstractDngDecompressor::decompressThread(): no other exception types are expected

Standardized file supported by many editors to avoid bad indentation.


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c8d1192) 63.93% compared to head (92a41df) 63.93%.

@@ Coverage Diff @@
## main #2909 +/- ##
=======================================
 Coverage 63.93% 63.93% 
=======================================
 Files 104 104 
 Lines 22400 22400 
 Branches 10877 10877 
=======================================
 Hits 14322 14322 
 Misses 5854 5854 
 Partials 2224 2224 


Merge remote-tracking branch 'upstream/pr/629' into develop

Fixes https://github.com/darktable-org/rawspeed/issues/403
Refs. https://github.com/darktable-org/darktable/issues/14969
Refs. https://github.com/darktable-org/darktable/issues/15480
Refs. https://github.com/darktable-org/darktable/issues/13782

Does not affect the performance of the non-restart-interval path.

  • upstream/pr/629:
    LJpegDecompressor::decodeN(): do proper restart marker checking
    LJpegDecompressor::decodeN(): actually skip restart marker, restart intervals are supported
    LJpegDecompressor::decodeN(): use ByteStream
    LJpegDecompressor::decodeN(): add new loop over each restart interval
    LJpeg: plumb numRowsPerRestartInterval into LJpegDecompressor, not supported yet.
    LJpeg: sink restart interval refutation into individual decoders
    LJpegDecoder::decodeScan(): extract some helper variables
    LJpegDecompressor: extract decodeRowN() out of decodeN()
    LJpegDecompressor: decode*() can be const
    LJpegDecompressor: get row-ref and operate on it rather than 2d-ref
    LJpegDecompressor: track predNext via a proper Array1DRef, not pointer
    LJpegDecompressor: s/bitStreamer/bs/
    JpegMarkers.h: add getRestartMarkerNumber() helper
    Extract peekMarker into JpegMarkers.h
    Extract JpegMarker into its own header
    BitStreamer: add skipBits() (with fill), for symmetry with the rest of methods
    ArwDecoder::DecodeLJpeg(): LJpegDecoder may throw, which is not okay here

LJpegDecompressor::decodeN(): add new loop over each restart interval

Benchmarking shows no performance change for the normal, non-restart-interval case, so we're good.

LJpegDecompressor::decodeN(): actually skip restart marker, restart intervals are supported

And that's the magic: the actual byte stream for all of the MCUs of a restart interval is followed by a JPEG marker carrying the restart interval's counter (modulo 8) (except for the last restart interval), and we need to skip it to get to the bytes for the MCUs of the next restart interval.

Again, performance-neutral.

LJpeg: plumb numRowsPerRestartInterval into LJpegDecompressor, not supported yet.

LJpegDecompressor::decodeN(): do proper restart marker checking

We shouldn't blindly hope that those two bytes are what we expect them to be; we should verify that they are a restart marker, with the correct number (modulo 8).

I'm not sure if there are cases where we don't exhaust the entirety of the input buffer beforehand, as in, may there be padding bytes before said restart marker?

Again, performance-neutral for the normal case.
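The skip-and-verify logic described above can be sketched as follows. This is illustrative, not the actual rawspeed implementation; it only encodes the stated invariant that a restart marker is the two bytes 0xFF, 0xD0 + (n mod 8):

```cpp
#include <cassert>
#include <cstdint>

// JPEG restart markers RST0..RST7 are 0xFF followed by 0xD0 + (n % 8),
// where n counts restart intervals from zero. Verify that the two bytes
// after a restart interval really are the marker we expect.
inline bool isExpectedRestartMarker(uint8_t b0, uint8_t b1,
                                    int restartIntervalIndex) {
  const uint8_t expected =
      0xD0 + static_cast<uint8_t>(restartIntervalIndex % 8);
  return b0 == 0xFF && b1 == expected;
}
```

A decoder would call this on the two bytes following each restart interval (except the last) and error out on a mismatch rather than blindly skipping them.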

LJpegDecompressor::decodeN(): use ByteStream

While a simple counter is enough right now, proper handling of restart intervals will require actually reading from the position in the buffer, and that requires actual run-time bounds checking, so we'll need ByteStream.

This, also, does not seem to affect performance of the current cases.
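A minimal sketch of the bounds-checking idea, where `TinyByteStream` is a hypothetical stand-in, not rawspeed's actual ByteStream:

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Every read is bounds-checked at run time, so walking past the end of
// the input raises an error instead of silently reading out of bounds.
class TinyByteStream {
  std::vector<uint8_t> data;
  size_t pos = 0;

public:
  explicit TinyByteStream(std::vector<uint8_t> d) : data(std::move(d)) {}

  uint8_t getByte() {
    if (pos >= data.size())
      throw std::out_of_range("read past end of stream");
    return data[pos++];
  }

  size_t remaining() const { return data.size() - pos; }
};
```

This is the trade-off mentioned above: a plain counter suffices while nothing is read from the buffer, but actually consuming restart markers requires reads, and reads need run-time bounds checking.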

LJpegDecompressor: extract decodeRowN() out of decodeN()

Performance-neutral. The story may be different if HT is sunk.

ArwDecoder::DecodeLJpeg(): LJpegDecoder may throw, which is not okay here

We are in an OpenMP thread; exceptions can't escape it.

Insights: aurelienpierreeng/ansel (January 22, 2024 – January 29, 2024)

Overview

• 0 Active pull requests
• 1 Active issue
• 0 Merged pull requests
• 0 Open pull requests
• 0 Closed issues
• 1 New issue

There hasn’t been any commit activity on aurelienpierreeng/ansel in the last week.

1 Issue opened by 1 person

Crash on import-Copy to disk

#320 opened Jan 28, 2024

2 Unresolved conversations

Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.

UI Scopes module

#293 commented on Jan 26, 2024 • 1 new comment

Zooming messes with pipeline (preview in Darkroom not stable)

#296 commented on Jan 27, 2024 • 1 new comment


Description of the bug

When I want to copy photos from one directory to another and configure the destination folder name with the exif.year parameter, it crashes.

To Reproduce

1. File / Import / Copy to disk
2. Source: C:\Vladicobulk\non\Pictures\import
3. Destination (base folder for all projects): C:\Vladicobulk\non\Pictures\Photo
4. Project folder naming pattern: $(EXIF.YEAR)$(EXIF.MONTH)
5. File naming pattern: $(FILE.NAME).$(FILE.EXTENSION)
6. Import
7. Error: "An unhandled exception occurred..."

Expected behavior

It would copy toto.CR2 from C:\Vladicobulk\non\Pictures\import to C:\Vladicobulk\non\Pictures\Photo\202401\toto.CR2 by creating the dir 202401

Context

Screenshots NOP Screencast NOP

Which commit introduced the error

Don't know

System

  • ansel version : 0.0.0+729~ge2c4a0a
  • OS : 23H2 22631.3007

Additional context: The file created during the crash is ansel_bt_82BGI2.txt:

this is 0.0.0+729~ge2c4a0a reporting an exception:


Error occurred on Sunday, January 28, 2024 at 09:45:15.

ansel.exe caused an Access Violation at location 00007FFDA38CC1C3 in module libansel.dll Reading from location 0000000000000000.

AddrPC Params 00007FFDA38CC1C3 00007FFDA3C630C0 00000251FCFF1070 00000251FDB0FE70 libansel.dll!dt_dev_load_image+0x173 00007FFDF93BB591 00007FFDA3998E10 00000251E6C61810 0000000000000000 libdarkroom.dll!try_enter+0x31 00007FFDA39A27C6 00000000FFFFFFFF 000000DC191FF0D8 00007FFDA3BB8DA0 libansel.dll!dt_view_manager_switch_by_view+0x86 00007FFDA3873130 0000000000000000 0000000000000000 0000000000000000 libansel.dll!dt_conf_cleanup+0x1e0 00007FFDE29032F5 0000000000000001 00007FFDA3C630C0 000002521808C200 libglib-2.0-0.dll!g_main_context_invoke_full+0x65 00007FFDE29034FA 0000000000000002 00007FFDA337AEF4 0000000000000000 libglib-2.0-0.dll!g_main_context_invoke+0x1a 00007FFDA3817369 00000251FD986F30 00000251E7651CD0 0000000000000001 libansel.dll!dt_imageio_tiff_read_profile+0x4d9 00007FFDEF344C91 00007FFDA38170E0 0000000000000000 000000DC191FF3B0 libffi-8.dll!ffi_call_win64+0x41 00007FFDEF344897 000000000000003F 0000000200020004 0000000000000005 libffi-8.dll!ffi_tramp_free+0x167 00007FFDEF344A72 00000251FD8D2CC0 000000DC191FF4F0 0000000000000014 libffi-8.dll!ffi_call+0x12 00007FFDEA718966 0000000000000000 0000000000000000 00007FFDEA71AD00 libgobject-2.0-0.dll!g_cclosure_marshal_generic+0x246 00007FFDEA718124 0000000000000000 0000000000000000 00000251FD69A830 libgobject-2.0-0.dll!g_closure_invoke+0x134 00007FFDEA72B2F7 00007FFE00000000 00007FFE00000000 00007FFDA5DEC750 libgobject-2.0-0.dll!g_param_spec_variant+0x2007 00007FFDEA731F6B 00000251E762CFC0 AF9300A3E4BC8A41 00000251FCFF1070 libgobject-2.0-0.dll!g_signal_emitv+0x1bb 00007FFDA3882701 0000000000000000 0000000000000000 0000000000000000 libansel.dll!dt_control_progress_cancellable+0xd1 00007FFDE28FEB5C 0000000000000002 00000251FCFF1070 0000000000000000 libglib-2.0-0.dll!g_clear_list+0x11ac 00007FFDE2901983 0000000000000000 00007FFDE2923E26 0000000000000000 libglib-2.0-0.dll!g_get_monotonic_time+0xa73 00007FFDE2902320 0000000000000000 0000000000000000 00007FFDA3C630C0 libglib-2.0-0.dll!g_main_loop_run+0x120 
00007FFDA318BB5E 0000000000000000 00007FF74EE2406A 00007FF74EE24028 libgtk-3-0.dll!gtk_main+0x7e 00007FFDA396B46F 0002000000000001 00000251FB605040 0000000000000001 libansel.dll!dt_gui_gtk_run+0xaf 00007FF74EE22D09 00007FF74EE21560 00007FF74EE2277D 00007FFE14DA0F28 ansel.exe!0x2d09 00007FF74EE214C2 0000000000000000 00007FF74EE27048 0000000000000000 ansel.exe!0x14c2 00007FF74EE212F7 0000000000000000 0000000000000000 0000000000000000 ansel.exe!0x12f7 00007FF74EE21406 0000000000000000 0000000000000000 0000000000000000 ansel.exe!0x1406 00007FFE16FC257D 0000000000000000 0000000000000000 0000000000000000 KERNEL32.DLL!BaseThreadInitThunk+0x1d 00007FFE177AAA58 0000000000000000 0000000000000000 0000000000000000 ntdll.dll!RtlUserThreadStart+0x28

00007FF74EE20000-00007FF74EE3E000 ansel.exe 0.0.0.729 00007FFE17750000-00007FFE17967000 ntdll.dll 10.0.22621.2506 00007FFE16FB0000-00007FFE17074000 KERNEL32.DLL 10.0.22621.2506 00007FFE15000000-00007FFE153A6000 KERNELBASE.dll 10.0.22621.2792 00007FFE14CA0000-00007FFE14DB1000 ucrtbase.dll 10.0.22621.2506 00007FFDF8E30000-00007FFDF8E65000 libintl-8.dll 0.22.4.0 00007FFE16EF0000-00007FFE16FA3000 ADVAPI32.dll 10.0.22621.3007 00007FFE16890000-00007FFE16937000 msvcrt.dll 7.0.22621.2506 00007FFE17230000-00007FFE172D8000 sechost.dll 10.0.22621.3007 00007FFDE28C0000-00007FFDE2A26000 libglib-2.0-0.dll 2.78.3.0 00007FFE14E60000-00007FFE14E88000 bcrypt.dll 10.0.22621.2506 00007FFE16D30000-00007FFE16ED0000 ole32.dll 10.0.22621.2506 00007FFE154A0000-00007FFE155B7000 RPCRT4.dll 10.0.22621.2792 00007FFE14DC0000-00007FFE14E5A000 msvcp_win.dll 10.0.22621.2506 00007FFE17480000-00007FFE174A9000 GDI32.dll 10.0.22621.2792 00007FFE14B50000-00007FFE14B76000 win32u.dll 10.0.22621.3007 00007FFE14B80000-00007FFE14C98000 gdi32full.dll 10.0.22621.2861 00007FFDB9BE0000-00007FFDB9CF9000 libiconv-2.dll 1.17.0.0 00007FFE17080000-00007FFE1722E000 USER32.dll 10.0.22621.2506 00007FFE15F30000-00007FFE162B9000 combase.dll 10.0.22621.2792 00007FFE156D0000-00007FFE15F2A000 SHELL32.dll 10.0.22621.3007 00007FFE174B0000-00007FFE17521000 WS2_32.dll 10.0.22621.1 00007FFDF1690000-00007FFDF16F8000 libpcre2-8-0.dll 00007FFDA3770000-00007FFDA3D0A000 libansel.dll 00007FFE05750000-00007FFE05774000 libgcc_s_seh-1.dll 00007FFDF1740000-00007FFDF1796000 exchndl.dll 0.9.11.0 00007FFDB3E30000-00007FFDB3F5C000 libcairo-2.dll 00007FFDEDB80000-00007FFDEDBB1000 libgdk_pixbuf-2.0-0.dll 2.42.10.0 00007FFDA5D20000-00007FFDA5E68000 libgdk-3-0.dll 3.24.39.0 00007FFE175D0000-00007FFE17601000 IMM32.dll 10.0.22621.2792 00007FFDA58A0000-00007FFDA5A59000 libgio-2.0-0.dll 2.78.3.0 00007FFE16410000-00007FFE16884000 SETUPAPI.dll 10.0.22621.2506 00007FFE162C0000-00007FFE1631E000 SHLWAPI.dll 10.0.22621.2506 
00007FFDEA710000-00007FFDEA76A000 libgobject-2.0-0.dll 2.78.3.0 00007FFDE8550000-00007FFDE85B5000 libpango-1.0-0.dll 1.50.14.0 00007FFE0CDA0000-00007FFE0CDB8000 libpangocairo-1.0-0.dll 1.50.14.0 00007FFE08FD0000-00007FFE08FE7000 libwinpthread-1.dll 1.0.0.0 00007FFDF9750000-00007FFDF976F000 zlib1.dll 00007FFDA2DE0000-00007FFDA3010000 libstdc++-6.dll 00007FFDEDB40000-00007FFDEDB76000 libavif-16.dll 00007FFDD9910000-00007FFDD99DC000 libcurl-4.dll 8.5.0.0 00007FFDA3010000-00007FFDA376F000 libgtk-3-0.dll 3.24.39.0 00007FFE14E90000-00007FFE14FF6000 CRYPT32.dll 10.0.22621.2506 00007FFE17360000-00007FFE173C2000 WLDAP32.dll 10.0.22621.2506 00007FFE0EE90000-00007FFE0EEA0000 libgmodule-2.0-0.dll 2.78.3.0 00007FFDA2AB0000-00007FFDA2DD7000 libexiv2.dll 00007FFE156C0000-00007FFE156C8000 PSAPI.DLL 10.0.22621.1 00007FFDEA950000-00007FFDEA99F000 libgomp-1.dll 00007FFE155C0000-00007FFE156B9000 comdlg32.dll 10.0.22621.2506 00007FFDA5BE0000-00007FFDA5D1C000 libGraphicsMagick-3.dll 00007FFE17610000-00007FFE17703000 shcore.dll 10.0.22621.2715 00007FFDC0C90000-00007FFDC0D81000 libheif.dll 00007FFDA2610000-00007FFDA27D6000 libicuuc74.dll 00007FFDB3D30000-00007FFDB3E2C000 libjpeg-8.dll 00007FFDA27E0000-00007FFDA2AAE000 libicuin74.dll 00007FFDF9130000-00007FFDF915E000 libjson-glib-1.0-0.dll 00007FFDA2300000-00007FFDA260A000 libOpenEXR-3_2.dll 00007FFDE6510000-00007FFDE657F000 liblcms2-2.dll 00007FFDE6280000-00007FFDE62F7000 libopenjp2-7.dll 00007FFDEA1E0000-00007FFDEA223000 libpng16-16.dll 00007FFDE6070000-00007FFDE60B3000 libpugixml.dll 1.14.0.0 00007FFDE24A0000-00007FFDE24F9000 libsecret-1-0.dll 00007FFDDE340000-00007FFDDE3D5000 libsoup-2.4-1.dll 00007FFDDAA40000-00007FFDDAAD3000 libtiff-6.dll 00007FFDA1A60000-00007FFDA22FE000 librsvg-2-2.dll 00007FFDA18D0000-00007FFDA1A52000 libsqlite3-0.dll 00007FFDF6000000-00007FFDF6014000 libwebpmux-3.dll 00007FFDD3190000-00007FFDD324F000 libwebp-7.dll 00007FFDA1780000-00007FFDA18C3000 libxml2-2.dll 00007FFDC0680000-00007FFDC0741000 mgwhelp.dll 
0.9.11.0 00007FFE0D010000-00007FFE0D01A000 VERSION.dll 10.0.22621.1 00007FFDF8730000-00007FFDF8737000 MSIMG32.dll 10.0.22621.2506 00007FFDBD960000-00007FFDBDA24000 libfreetype-6.dll 2.13.2.0 00007FFDDE1E0000-00007FFDDE237000 libfontconfig-1.dll 00007FFDC0A00000-00007FFDC0AAF000 libpixman-1-0.dll 00007FFE11EF0000-00007FFE11F1B000 dwmapi.dll 10.0.22621.2506 00007FFDFC200000-00007FFDFC3B9000 gdiplus.dll 10.0.22621.2506 00007FFE13290000-00007FFE1329E000 HID.DLL 10.0.22621.1 00007FFE134F0000-00007FFE1351D000 IPHLPAPI.DLL 10.0.22621.1 00007FFE13560000-00007FFE13659000 DNSAPI.dll 10.0.22621.2506 00007FFE0DCA0000-00007FFE0DCD4000 WINMM.dll 10.0.22621.2506 00007FFDF4850000-00007FFDF4861000 libcairo-gobject-2.dll 00007FFDF2020000-00007FFDF204C000 libfribidi-0.dll 00007FFDA15D0000-00007FFDA177F000 libepoxy-0.dll 00007FFDF4320000-00007FFDF433F000 libpangowin32-1.0-0.dll 1.50.14.0 00007FFDEF340000-00007FFDEF351000 libffi-8.dll 00007FFDA1490000-00007FFDA15CC000 libharfbuzz-0.dll 00007FFDEEDD0000-00007FFDEEDE8000 libthai-0.dll 00007FFDED910000-00007FFDED92C000 libpangoft2-1.0-0.dll 1.50.14.0 00007FFDA09A0000-00007FFDA0B96000 libdav1d-7.dll 7.0.0.0 00007FFDA0630000-00007FFDA0995000 rav1e.dll 00007FFDECD90000-00007FFDECDA6000 libsharpyuv-0.dll 00007FFDA0BA0000-00007FFDA1484000 libaom.dll 00007FFDA5790000-00007FFDA5891000 libyuv.dll 00007FFDEBF10000-00007FFDEBF27000 libbrotlidec.dll 00007FFD9FE40000-00007FFDA0623000 libSvtAv1Enc.dll 00007FFDE2870000-00007FFDE28B2000 libidn2-0.dll 00007FFD9F960000-00007FFD9FE31000 libcrypto-3-x64.dll 3.2.0.0 00007FFDEA3A0000-00007FFDEA3C2000 libpsl-5.dll 00007FFDEA6D0000-00007FFDEA707000 libnghttp2-14.dll 1.58.0.0 00007FFDE23B0000-00007FFDE23FE000 libssh2-1.dll 1.11.0.0 00007FFD9F870000-00007FFD9F954000 libssl-3-x64.dll 3.2.0.0 00007FFD9F750000-00007FFD9F86E000 libzstd.dll 00007FFE07640000-00007FFE078D3000 COMCTL32.dll 6.10.22621.2506 00007FFDEBEF0000-00007FFDEBF0A000 libbz2-1.dll 00007FFDE0B90000-00007FFDE0BC4000 libexpat-1.dll 
00007FFDEAED0000-00007FFDEAEE8000 libltdl-7.dll 00007FFDBDEA0000-00007FFDBDF5A000 libde265-0.dll 00007FFDF05A0000-00007FFDF0648000 WINSPOOL.DRV 10.0.22621.2506 00007FFD9E3F0000-00007FFD9F742000 libx265.dll 3.4.0.31 00007FFDE8920000-00007FFDE894E000 libatk-1.0-0.dll 2.50.0.0 00007FFDDD330000-00007FFDDD388000 libImath-3_1.dll 00007FFDEA440000-00007FFDEA452000 libIlmThread-3_2.dll 00007FFDDA980000-00007FFDDA9E0000 libIex-3_2.dll 00007FFD9C440000-00007FFD9C67E000 libOpenEXRCore-3_2.dll 00007FFDEA040000-00007FFDEA05C000 libdeflate.dll 00007FFD9C2F0000-00007FFD9C436000 libgcrypt-20.dll 1.10.3.0 00007FFDE99C0000-00007FFDE99D7000 libjbig-0.dll 00007FFDB8BF0000-00007FFDB8CB1000 libLerc.dll 00007FFDDE2A0000-00007FFDDE2D5000 liblzma-5.dll 5.4.5.0 00007FFE13F90000-00007FFE13FBC000 USERENV.dll 10.0.22621.2506 00007FFE12490000-00007FFE126C3000 dbghelp.dll 10.0.22621.2506 00007FFE16330000-00007FFE16407000 OLEAUT32.dll 10.0.22621.2506 00007FFE104E0000-00007FFE10753000 DWrite.dll 10.0.22621.2506 00007FFDE0020000-00007FFDE0039000 USP10.dll 10.0.22621.1 00007FFDE61C0000-00007FFDE61EC000 libgraphite2.dll 00007FFE0ED20000-00007FFE0ED30000 libdatrie-1.dll 00007FFDE2470000-00007FFDE249D000 libbrotlicommon.dll 00007FFD9C100000-00007FFD9C2ED000 libunistring-5.dll 1.1.0.0 00007FFD9C680000-00007FFD9E3E9000 libicudt74.dll 00007FFDD9670000-00007FFDD96AC000 libgpg-error-0.dll 1.47.0.0 00007FFE09700000-00007FFE09732000 dbgcore.DLL 10.0.22621.1 00007FFE14220000-00007FFE1422C000 CRYPTBASE.DLL 10.0.22621.1 00007FFE15420000-00007FFE1549A000 bcryptPrimitives.dll 10.0.22621.2506 00007FFE16B00000-00007FFE16B09000 NSI.dll 10.0.22621.1 00007FFE13A20000-00007FFE13A38000 kernel.appcore.dll 10.0.22621.2715 00007FFE12990000-00007FFE13286000 windows.storage.dll 10.0.22621.2792 00007FFE12850000-00007FFE1298E000 wintypes.dll 10.0.22621.2792 00007FFE0F040000-00007FFE0F177000 winhttp.dll 10.0.22621.2506 00007FFDEEC90000-00007FFDEEC96000 KBDFR.DLL 10.0.22621.1 00007FFE11C70000-00007FFE11D1B000 uxtheme.dll 
10.0.22621.3007 00007FFE16990000-00007FFE16AE0000 MSCTF.dll 10.0.22621.2792 00007FFE14660000-00007FFE1468C000 DEVOBJ.dll 10.0.22621.2506 00007FFE14690000-00007FFE146DE000 cfgmgr32.dll 10.0.22621.2506 00007FFE153B0000-00007FFE1541B000 WINTRUST.dll 10.0.22621.3007 00007FFE14290000-00007FFE142A2000 MSASN1.dll 10.0.22621.2506 00007FFE173D0000-00007FFE17480000 clbcatq.dll 2001.12.10941.16384 00007FFDD5F80000-00007FFDD5FFE000 OpenCL.dll 3.0.1.0 00007FFE11D80000-00007FFE11DB6000 dxcore.dll 10.0.22621.2506 00007FFE095C0000-00007FFE096F5000 AppXDeploymentClient.dll 10.0.22621.2792 00007FFD9B380000-00007FFD9C0FA000 amdocl64.dll 31.0.21024.5005 00007FFDE2AF0000-00007FFDE2BF0000 OPENGL32.dll 10.0.22621.2506 00007FFE05970000-00007FFE0599D000 GLU32.dll 10.0.22621.2506 00007FFD97140000-00007FFD9B37E000 amdhsail64.dll 00007FFD75500000-00007FFD7BA46000 amd_comgr.dll 00007FFE149C0000-00007FFE149E6000 profapi.dll 10.0.22621.2506 00007FFDE5FF0000-00007FFDE5FF9000 IconCodecService.dll 10.0.22621.1 00007FFE10330000-00007FFE104E0000 WindowsCodecs.dll 10.0.22621.2506 00007FFDF1460000-00007FFDF155C000 Windows.ApplicationModel.dll 10.0.22621.2506 00007FFDEF690000-00007FFDEF77B000 Windows.StateRepositoryPS.dll 10.0.22621.2792 00007FFE0CD40000-00007FFE0CD63000 Windows.StateRepositoryBroker.dll 10.0.22621.2792 00007FFE0F3C0000-00007FFE0F4C1000 propsys.dll 7.0.22621.2506 00007FFDEBAD0000-00007FFDEBAF6000 mssprxy.dll 7.0.22621.2792 00007FFDFD400000-00007FFDFD518000 mrmcorer.dll 10.0.22621.2506 00007FFE06940000-00007FFE06BFC000 iertutil.dll 11.0.22621.3007 00007FFE0D050000-00007FFE0D06A000 windows.staterepositorycore.dll 10.0.22621.2792 00007FFE08030000-00007FFE0806D000 windows.staterepositoryclient.dll 10.0.22621.2792 00007FFDFC160000-00007FFDFC192000 bcp47mrm.dll 10.0.22621.2506 00007FFDFD050000-00007FFDFD1C4000 Windows.UI.dll 10.0.22621.2506 00007FFE0FAA0000-00007FFE0FAB0000 libpixbufloader-png.dll 00007FFDE11B0000-00007FFDE126C000 mscms.dll 10.0.22621.2506 00007FFDF17A0000-00007FFDF17E9000 
icm32.dll 10.0.22621.2506 00007FFDF61F0000-00007FFDF633A000 textinputframework.dll 10.0.22621.2792 00007FFE11620000-00007FFE11754000 CoreMessaging.dll 10.0.22621.3007 00007FFE0DFE0000-00007FFE0E34C000 CoreUIComponents.dll 10.0.22621.2506 00007FFDF93B0000-00007FFDF93D2000 libdarkroom.dll 00007FFE0EF40000-00007FFE0EF4F000 liblighttable.dll 00007FFE09070000-00007FFE0907F000 libavif.dll 00007FFE09050000-00007FFE0905D000 libcopy.dll 00007FFDF9390000-00007FFDF93A6000 libexr.dll 00007FFE09030000-00007FFE0903E000 libj2k.dll 00007FFE08F20000-00007FFE08F2E000 libjpeg.dll 00007FFE084C0000-00007FFE084D0000 libpdf.dll 00007FFE060A0000-00007FFE060AD000 libpfm.dll 00007FFE05B90000-00007FFE05B9F000 libpng.dll 00007FFDFD810000-00007FFDFD81D000 libppm.dll 00007FFDFD570000-00007FFDFD57F000 libtiff.dll 00007FFDF9F50000-00007FFDF9F5F000 libwebp.dll 00007FFDF48B0000-00007FFDF48C3000 libxcf.dll 00007FFDF96E0000-00007FFDF96EF000 libdisk.dll 00007FFDF8EE0000-00007FFDF8EF0000 libgallery.dll 00007FFDF4890000-00007FFDF48A1000 libpiwigo.dll 00007FFDF1830000-00007FFDF1858000 libashift.dll 00007FFDF4870000-00007FFDF4889000 libatrous.dll 00007FFDF1810000-00007FFDF1830000 libbasecurve.dll 00007FFDF17F0000-00007FFDF1806000 libbasicadj.dll 00007FFDF5FF0000-00007FFDF5FFF000 libbilat.dll 00007FFDE64F0000-00007FFDE6501000 libbilateral.dll 00007FFDF19F0000-00007FFDF19FE000 libbloom.dll 00007FFDE6260000-00007FFDE6271000 libblurs.dll 00007FFDE6050000-00007FFDE6061000 libborders.dll 00007FFDE5250000-00007FFDE5266000 libcacorrect.dll 00007FFDF1680000-00007FFDF1690000 libcacorrectrgb.dll 00007FFDEF5D0000-00007FFDEF5DE000 libcensorize.dll 00007FFDEF330000-00007FFDEF340000 libchannelmixer.dll 00007FFDDE490000-00007FFDDE4BD000 libchannelmixerrgb.dll 00007FFDECD80000-00007FFDECD8E000 libclahe.dll 00007FFDE3130000-00007FFDE314C000 libclipping.dll 00007FFDEA1D0000-00007FFDEA1DE000 libcolisa.dll 00007FFDDDD30000-00007FFDDDD4B000 libcolorbalance.dll 00007FFDDD310000-00007FFDDD32B000 libcolorbalancergb.dll 
00007FFDDAC30000-00007FFDDAC4B000 libcolorchecker.dll 00007FFDEA100000-00007FFDEA10E000 libcolorcontrast.dll 00007FFDE9C30000-00007FFDE9C40000 libcolorcorrection.dll 00007FFDDAA20000-00007FFDDAA39000 libcolorin.dll 00007FFDE9540000-00007FFDE954E000 libcolorize.dll 00007FFDD98F0000-00007FFDD9903000 libcolormapping.dll 00007FFDD5640000-00007FFDD5651000 libcolorout.dll 00007FFDD5620000-00007FFDD5633000 libcolorreconstruct.dll 00007FFDE0EA0000-00007FFDE0EAF000 libcolortransfer.dll 00007FFDD5600000-00007FFDD561C000 libcolorzones.dll 00007FFDD55E0000-00007FFDD55F4000 libcrop.dll 00007FFDE0B80000-00007FFDE0B8F000 libdefringe.dll 00007FFDD3100000-00007FFDD314D000 libdemosaic.dll 00007FFDD37F0000-00007FFDD380E000 libdenoiseprofile.dll 00007FFDC3110000-00007FFDC3125000 libdiffuse.dll 00007FFDDE330000-00007FFDDE33F000 libdither.dll 00007FFDC30F0000-00007FFDC3105000 libequalizer.dll 00007FFDC30D0000-00007FFDC30E2000 libexposure.dll 00007FFDC30B0000-00007FFDC30C6000 libfilmic.dll 00007FFDC1820000-00007FFDC184B000 libfilmicrgb.dll 00007FFDDE290000-00007FFDDE29D000 libfinalscale.dll 00007FFDDE120000-00007FFDDE12F000 libflip.dll 00007FFDDDFA0000-00007FFDDDFB0000 libgamma.dll 00007FFDD98E0000-00007FFDD98F0000 libglobaltonemap.dll 00007FFDC1800000-00007FFDC1812000 libgraduatednd.dll 00007FFDD9660000-00007FFDD966E000 libgrain.dll 00007FFDD5F50000-00007FFDD5F60000 libhazeremoval.dll 00007FFDC16E0000-00007FFDC16FA000 libhighlights.dll 00007FFDD55D0000-00007FFDD55DE000 libhighpass.dll 00007FFDD37E0000-00007FFDD37EF000 libhotpixels.dll 00007FFDD3780000-00007FFDD3790000 libinvert.dll 00007FFDC16B0000-00007FFDC16D1000 liblens.dll 00007FFDC1680000-00007FFDC16A8000 liblensfun.dll 00007FFDC30A0000-00007FFDC30AD000 libsystre-0.dll 00007FFDC1660000-00007FFDC167F000 libtre-5.dll 00007FFDC1240000-00007FFDC1251000 liblevels.dll 00007FFDC1220000-00007FFDC123A000 libliquify.dll 00007FFDC0890000-00007FFDC08A2000 liblowlight.dll 00007FFDC2080000-00007FFDC2090000 liblowpass.dll 
00007FFDC05C0000-00007FFDC05E0000 liblut3d.dll 00007FFD93440000-00007FFD93F28000 libgmic.dll 00007FFDBE800000-00007FFDBE811000 libfftw3_threads-3.dll 00007FFDBE7A0000-00007FFDBE7FA000 libGraphicsMagick++-12.dll 00007FFD96CF0000-00007FFD97139000 libopencv_core-409.dll 4.9.0.0 00007FFDA3EE0000-00007FFDA42F2000 libfftw3-3.dll 00007FFDB9B30000-00007FFDB9BD9000 libopencv_videoio-409.dll 4.9.0.0 00007FFDBC600000-00007FFDBC666000 libtbb12.dll 2021.11.0.0 00007FFD8D2C0000-00007FFD8FD4D000 libopenblas.dll 00007FFDB7420000-00007FFDB7692000 avformat-60.dll 60.16.100.0 00007FFD90E60000-00007FFD9230F000 avcodec-60.dll 60.31.102.0 00007FFDB90D0000-00007FFDB90EB000 libgstapp-1.0-0.dll 00007FFDB8E30000-00007FFDB8EB7000 libgstaudio-1.0-0.dll 00007FFDB8DA0000-00007FFDB8E24000 libgstbase-1.0-0.dll 00007FFDB9080000-00007FFDB90C3000 libgstpbutils-1.0-0.dll 00007FFD8C190000-00007FFD8D2BC000 avutil-58.dll 58.29.100.0 00007FFDB6EC0000-00007FFDB7000000 libgstreamer-1.0-0.dll 00007FFDB8D80000-00007FFDB8D9A000 libgstriff-1.0-0.dll 00007FFDB6DF0000-00007FFDB6EBB000 libgstvideo-1.0-0.dll 00007FFDB8CD0000-00007FFDB8D76000 swscale-7.dll 7.5.100.0 00007FFDB7BE0000-00007FFDB7C58000 libopencv_imgcodecs-409.dll 4.9.0.0 00007FFD92D90000-00007FFD93435000 libopencv_imgproc-409.dll 4.9.0.0 00007FFD969D0000-00007FFD96CE8000 libgfortran-5.dll 00007FFDB73C0000-00007FFDB7416000 libgme.dll 00007FFDB79C0000-00007FFDB7A23000 libbluray-2.dll 00007FFD92B80000-00007FFD92D82000 libgnutls-30.dll 00007FFDB6DC0000-00007FFDB6DEF000 librtmp-1.dll 00007FFDA3DF0000-00007FFDA3EDB000 libmodplug-1.dll 00007FFDA3D20000-00007FFDA3DEB000 libsrt.dll 00007FFDB6C90000-00007FFDB6CA7000 libgsm.dll 00007FFDB6D30000-00007FFDB6DB3000 libssh.dll 00007FFDB5910000-00007FFDB5992000 libmp3lame-0.dll 00007FFDB6C50000-00007FFDB6C82000 libopencore-amrnb-0.dll 00007FFDB6C30000-00007FFDB6C50000 libopencore-amrwb-0.dll 00007FFDB62C0000-00007FFDB62E9000 libspeex-1.dll 00007FFDB60C0000-00007FFDB612B000 libopus-0.dll 
00007FFDB4AF0000-00007FFDB4B0F000 libtheoradec-1.dll 00007FFDB4AB0000-00007FFDB4AEE000 libtheoraenc-1.dll 00007FFDA4C20000-00007FFDA4C53000 libvorbis-0.dll 00007FFDA4BD0000-00007FFDA4C12000 libva.dll 00007FFD92AE0000-00007FFD92B75000 libvorbisenc-2.dll 00007FFD92A70000-00007FFD92ADB000 libvpl.dll 2.10.0.0 00007FFD92700000-00007FFD92A6F000 libvpx-1.dll 00007FFD90B70000-00007FFD90E54000 libx264-164.dll 0.164.3161.0 00007FFD925D0000-00007FFD926F1000 xvidcore.dll 00007FFDB4A80000-00007FFDB4AA9000 swresample-4.dll 4.12.100.0 00007FFD92570000-00007FFD925CB000 liborc-0.4-0.dll 00007FFD92520000-00007FFD9256D000 libgsttag-1.0-0.dll 00007FFDC1210000-00007FFDC121D000 libva_win32.dll 00007FFD924B0000-00007FFD92514000 libquadmath-0.dll 00007FFD90A00000-00007FFD90AAD000 libgmp-10.dll 00007FFD90AB0000-00007FFD90B68000 libbrotlienc.dll 00007FFD909B0000-00007FFD909FC000 libhogweed-6.dll 00007FFE14350000-00007FFE1437E000 ncrypt.dll 10.0.22621.3007 00007FFD90950000-00007FFD909A9000 libnettle-8.dll 00007FFD905A0000-00007FFD90703000 libp11-kit-0.dll 00007FFDB3D10000-00007FFDB3D30000 libtasn1-6.dll 00007FFDF7410000-00007FFDF7419000 WSOCK32.dll 10.0.22621.1 00007FFDA4BB0000-00007FFDA4BC3000 libogg-0.dll 00007FFD8C100000-00007FFD8C187000 libsoxr.dll 00007FFE14310000-00007FFE14347000 NTASN1.dll 10.0.22621.1 00007FFDC0C80000-00007FFDC0C8D000 libmask_manager.dll 00007FFD969B0000-00007FFD969C1000 libmonochrome.dll 00007FFD92490000-00007FFD924A3000 libnegadoctor.dll 00007FFDC0920000-00007FFDC092E000 libnlmeans.dll 00007FFDBFF20000-00007FFDBFF2F000 liboverexposed.dll 00007FFDBC560000-00007FFDBC570000 libprofile_gamma.dll 00007FFD92470000-00007FFD92483000 librawdenoise.dll 00007FFDB9A70000-00007FFDB9A7F000 librawoverexposed.dll 00007FFD90930000-00007FFD90942000 librawprepare.dll 00007FFDB8CC0000-00007FFDB8CCE000 librelight.dll 00007FFD90910000-00007FFD9092E000 libretouch.dll 00007FFD908F0000-00007FFD9090A000 librgbcurve.dll 00007FFD908D0000-00007FFD908E4000 librgblevels.dll 
00007FFDB7BD0000-00007FFDB7BDE000 librotatepixels.dll 00007FFDB6C20000-00007FFDB6C2E000 libscalepixels.dll 00007FFD90580000-00007FFD90593000 libshadhi.dll 00007FFDB5900000-00007FFDB5910000 libsharpen.dll 00007FFDA5BD0000-00007FFDA5BDE000 libsoften.dll 00007FFDA5780000-00007FFDA5790000 libsplittoning.dll 00007FFD90560000-00007FFD90571000 libspots.dll 00007FFD8BFF0000-00007FFD8C0FA000 libtemperature.dll 00007FFD90540000-00007FFD90559000 libtonecurve.dll 00007FFD8FE30000-00007FFD8FE59000 libtoneequal.dll 00007FFDA4BA0000-00007FFDA4BAF000 libtonemap.dll 00007FFDA3D10000-00007FFDA3D1E000 libvelvia.dll 00007FFD90530000-00007FFD9053D000 libvibrance.dll 00007FFD8FE10000-00007FFD8FE21000 libvignette.dll 00007FFD8BFD0000-00007FFD8BFE4000 libwatermark.dll 00007FFD8BFB0000-00007FFD8BFC3000 libzonesystem.dll 00007FFD8FE00000-00007FFD8FE0D000 libbackgroundjobs.dll 00007FFD8BF90000-00007FFD8BFAB000 libcollect.dll 00007FFD8BF70000-00007FFD8BF81000 libcolorpicker.dll 00007FFD8BF60000-00007FFD8BF70000 libduplicate.dll 00007FFD8BF40000-00007FFD8BF57000 libexport.dll 00007FFD8BF30000-00007FFD8BF3D000 libfilmstrip.dll 00007FFD8BF20000-00007FFD8BF2F000 libfilter.dll 00007FFD8BF00000-00007FFD8BF16000 libgeotagging.dll 00007FFD8BEE0000-00007FFD8BF00000 libosmgpsmap-1.0-1.dll 00007FFD8BED0000-00007FFD8BEDD000 libhinter.dll 00007FFD8BEB0000-00007FFD8BEC8000 libhistogram.dll 00007FFD8BE90000-00007FFD8BEA4000 libhistory.dll 00007FFD8BE80000-00007FFD8BE8D000 libimage_infos.dll 00007FFD8BE70000-00007FFD8BE7D000 libioporder.dll 00007FFD8BE60000-00007FFD8BE6D000 liblighttable_mode.dll 00007FFD8BE50000-00007FFD8BE5F000 liblocation.dll 00007FFD8BE30000-00007FFD8BE41000 libmap_locations.dll 00007FFD8BE20000-00007FFD8BE2E000 libmap_settings.dll 00007FFD8BE00000-00007FFD8BE16000 libmasks.dll 00007FFD8BDF0000-00007FFD8BDFD000 libmasktoolbar.dll 00007FFD8BDC0000-00007FFD8BDE2000 libmenu.dll 00007FFD8BDB0000-00007FFD8BDBD000 libmenubuttons.dll 00007FFD8BD90000-00007FFD8BDA1000 libmetadata.dll 
00007FFD8BD70000-00007FFD8BD82000 libmetadata_view.dll 00007FFD8BD50000-00007FFD8BD69000 libmidi.dll 00007FFD8BD30000-00007FFD8BD41000 libportmidi.dll 00007FFE024B0000-00007FFE024D9000 winmmbase.dll 10.0.22621.1 00007FFE09440000-00007FFE094DD000 MMDevAPI.DLL 10.0.22621.2506 00007FFDED6E0000-00007FFDED726000 wdmaud.drv 10.0.22621.1 00007FFE0F2A0000-00007FFE0F2AB000 AVRT.dll 10.0.22621.2506 00007FFE05520000-00007FFE05529000 ksuser.dll 10.0.22621.1 00007FFDF7450000-00007FFDF763C000 AUDIOSES.DLL 10.0.22621.2506 00007FFDF9050000-00007FFDF905E000 msacm32.drv 10.0.22621.2506 00007FFDEDB20000-00007FFDEDB3E000 MSACM32.dll 10.0.22621.1 00007FFDEFD90000-00007FFDEFD9B000 midimap.dll 10.0.22621.2506 00007FFD8BD20000-00007FFD8BD2F000 libmodulegroups.dll 00007FFD8BD10000-00007FFD8BD1D000 libmodule_toolbox.dll 00007FFD8BD00000-00007FFD8BD10000 libnavigation.dll 00007FFD8BCF0000-00007FFD8BCFF000 libsnapshots.dll 00007FFD8BCD0000-00007FFD8BCE1000 libstyles.dll 00007FFD8BCB0000-00007FFD8BCCC000 libtagging.dll 00007FFD8BCA0000-00007FFD8BCAD000 libview_toolbox.dll 00007FFD8BC90000-00007FFD8BC9D000 libpixbufloader-svg.dll 00007FFE13EF0000-00007FFE13F59000 mswsock.dll 10.0.22621.2506 00007FFD8BC80000-00007FFD8BC88000 wshunix.dll 10.0.22621.1 00007FFDFADE0000-00007FFDFADFE000 MPR.dll 10.0.22621.1 00007FFDE30A0000-00007FFDE30C9000 p9np.dll 10.0.22621.2506 00007FFDEBAC0000-00007FFDEBACC000 drprov.dll 10.0.22621.1 00007FFDE6630000-00007FFDE6649000 ntlanman.dll 10.0.22621.2506 00007FFDE0AF0000-00007FFDE0B0F000 davclnt.dll 10.0.22621.1 00007FFE14710000-00007FFE14776000 WINSTA.dll 10.0.22621.2506 00007FFE0E890000-00007FFE0E8AA000 wkscli.dll 10.0.22621.2506 00007FFDF1A00000-00007FFDF1A12000 cscapi.dll 10.0.22621.1 00007FFE134E0000-00007FFE134EC000 netutils.dll 10.0.22621.2506 00007FFDE2AD0000-00007FFDE2AE1000 libpixbufloader-jpeg.dll 00007FFD90710000-00007FFD908CC000 DUI70.dll 10.0.22621.2506 00007FFDB4B90000-00007FFDB4C29000 DUser.dll 10.0.22621.1 00007FFDF6590000-00007FFDF6640000 
TextShaping.dll 10.0.22621.2506 00007FFDF19C0000-00007FFDF19E8000 edputil.dll 10.0.22621.1 00007FFDF7790000-00007FFDF77F9000 oleacc.dll 7.2.22621.1 00007FFDDB9F0000-00007FFDDBA5B000 thumbcache.dll 10.0.22621.2506 00007FFDF4C20000-00007FFDF4C7E000 dataexchange.dll 10.0.22621.2506 00007FFE0C950000-00007FFE0CBD5000 twinapi.appcore.dll 10.0.22621.2506 00007FFDF7130000-00007FFDF72EB000 Windows.Globalization.dll 10.0.22621.2506 00007FFDF1650000-00007FFDF1679000 globinputhost.dll 10.0.22621.2792 00007FFDFD730000-00007FFDFD790000 Bcp47Langs.dll 10.0.22621.2506 00007FFE0F2D0000-00007FFE0F307000 xmllite.dll 10.0.22621.2506 00007FFE03EA0000-00007FFE03F5B000 StructuredQuery.dll 7.0.22621.2506 00007FFE0C020000-00007FFE0C645000 OneCoreUAPCommonProxyStub.dll 10.0.22621.2792 00007FFDF4C80000-00007FFDF4C8E000 atlthunk.dll 10.0.22621.1 00007FFDD3EF0000-00007FFDD3F95000 Windows.FileExplorer.Common.dll 10.0.22621.2506 00007FFDEF820000-00007FFDEF8CB000 OneCoreCommonProxyStub.dll 10.0.22621.3007 00007FFDEF240000-00007FFDEF329000 Windows.Storage.Search.dll 10.0.22621.2506 00007FFDE3200000-00007FFDE36F3000 windowsudk.shellcommon.dll 10.0.22621.3007 00007FFDFA000000-00007FFDFA175000 Windows.UI.Immersive.dll 10.0.22621.2506 00007FFE0F9A0000-00007FFE0FA45000 policymanager.dll 10.0.22621.2792 00007FFE0F8F0000-00007FFE0F983000 msvcp110_win.dll 10.0.22621.1 00007FFE172E0000-00007FFE1735F000 coml2.dll 10.0.22621.2506 00007FFDEA870000-00007FFDEA87D000 LINKINFO.dll 10.0.22621.1 00007FFE11900000-00007FFE11997000 apphelp.dll 10.0.22621.2506 00007FFDDE2E0000-00007FFDDE2FA000 NetworkExplorer.dll 10.0.22621.1 00007FFE08070000-00007FFE081A5000 Windows.System.Launcher.dll 10.0.22621.2506 00007FFDC07F0000-00007FFDC0847000 dlnashext.dll 10.0.22621.2715 00007FFD8BC10000-00007FFD8BC79000 PlayToDevice.dll 10.0.22621.1 00007FFE011D0000-00007FFE011F1000 DevDispItemProvider.dll 10.0.22621.2506 00007FFDDBC10000-00007FFDDBCA4000 ntshrui.dll 10.0.22621.2506 00007FFE13CE0000-00007FFE13D23000 SspiCli.dll 
10.0.22621.3007 00007FFE06C00000-00007FFE06C28000 srvcli.dll 10.0.22621.2506 00007FFD8FD50000-00007FFD8FDF3000 wpdshext.dll 10.0.22621.2715 00007FFDD3650000-00007FFDD36EE000 PortableDeviceApi.dll 10.0.22621.1 00007FFDDADF0000-00007FFDDAE27000 EhStorShell.dll 10.0.22621.1 00007FFD8BBE0000-00007FFD8BC04000 EhStorAPI.dll 10.0.22621.1 00007FFE13830000-00007FFE13844000 WTSAPI32.dll 10.0.22621.1 00007FFDDAD10000-00007FFDDADE2000 cscui.dll 10.0.22621.2506 00007FFDDE4F0000-00007FFDDE51D000 cldapi.dll 10.0.22621.2506 00007FFDDC4B0000-00007FFDDC759000 explorerframe.dll 10.0.22621.2792

Windows 10.0.22621.2506 DrMingw 0.9.11

What you are trying to do is impossible.

$EXIF_YEAR is a property of picture files that you can re-use to rename them. Folders have no EXIF properties and therefore you can't use the $EXIF_xxx variables on them (move, import, rename, etc.).

Folders have their own way of declaring date, which uses the GUI value from import dialog: $YEAR, $MONTH, $DAY.

Not sure what can be done from the GUI to prevent users from trying to use EXIF variables in folder naming.
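The two variable scopes described above (folder names only see the import dialog's $YEAR/$MONTH/$DAY values, while file names additionally see per-file $EXIF_* values) can be sketched as a plain substitution pass. The function name and variable maps below are illustrative assumptions, not Ansel's actual implementation:

```cpp
#include <map>
#include <string>

// Hypothetical sketch: expand $NAME tokens in a pattern from a variable map.
// An $EXIF_* token in a folder pattern simply finds nothing to substitute,
// because EXIF variables are only defined in the per-file map.
std::string expandPattern(const std::string& pattern,
                          const std::map<std::string, std::string>& vars) {
  std::string out = pattern;
  for (const auto& [name, value] : vars) {
    const std::string token = "$" + name;
    for (std::string::size_type pos = out.find(token);
         pos != std::string::npos;
         pos = out.find(token, pos + value.size()))
      out.replace(pos, token.size(), value);
  }
  return out;
}
```

With the import-dialog values {YEAR: 2024, MONTH: 01, DAY: 29}, the folder pattern $YEAR/$MONTH/$DAY expands to 2024/01/29, while an $EXIF_YEAR token is left untouched because no EXIF variable exists at the folder level.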

Hello. Thanks for the reply. I would have liked to do as in darktable, where the photos' EXIF data is used to name the directories, and thus work as in Lightroom. Maybe I didn't make myself clear in my GitHub ticket. What I did was use darktable for the import (shameful, I know ;)) and then add the photos imported that way to Ansel's collection. Mixing the two may be bold, but it will do; in the worst case I damage the XMP files. Good luck with Ansel, I hope it will take off. Regards

Elie



I don't understand the goal here with respect to the workflow, nor what the problem is.

The "copy to disk" option creates a new folder whose name is specified at import time. Using EXIF values makes no sense at that level (regardless of whether darktable or Lightroom allow it or not), since EXIF data belongs to the images.

What is the desired behavior if an event (say, a wedding) has photos taken both before and after midnight? Do we arbitrarily split the photo set in two based on the photos' dates? Using EXIF data globally at the folder level is a design nonsense. Using a manually specified date removes the ambiguities and limits the possible problems.

Good evening. I do understand your point of view. I have always made sure I can do without image-processing software in case it becomes unavailable, which is why I take advantage of this capability to organize my photos that way. I know I could order them with tags, for example. Besides, I am not a pro and I am not organized; I work on photos by project when I have the time, and since I am also fairly irregular, the software's automations are of great use to me. It is also convenient for managing my backups to have a chronological regularity.

But if Ansel doesn't do it, that's fine.

Good luck.

Regards.


CI: update to latest conan 1

Signed-off-by: Rosen Penev

Refs. https://github.com/darktable-org/rawspeed/issues/403 Refs. https://github.com/darktable-org/darktable/issues/14969 Refs. https://github.com/darktable-org/darktable/issues/15480 Refs. https://github.com/darktable-org/darktable/issues/13782

Codecov Report

Attention: 75 lines in your changes are missing coverage. Please review.

Comparison is base (c7cd7ff) 60.72% compared to head (f3bf815) 60.95%.

Files Patch % Lines
...rc/librawspeed/decompressors/LJpegDecompressor.cpp 47.76% 34 Missing and 1 partial :warning:
src/librawspeed/decoders/ArwDecoder.cpp 5.26% 17 Missing and 1 partial :warning:
src/librawspeed/adt/CroppedArray1DRef.h 31.25% 11 Missing :warning:
src/librawspeed/decompressors/LJpegDecoder.cpp 71.42% 4 Missing :warning:
src/librawspeed/decompressors/JpegMarkers.h 75.00% 2 Missing and 1 partial :warning:
...zz/librawspeed/decompressors/LJpegDecompressor.cpp 0.00% 2 Missing :warning:
src/librawspeed/decompressors/Cr2LJpegDecoder.cpp 50.00% 1 Missing :warning:
...brawspeed/decompressors/HasselbladLJpegDecoder.cpp 50.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #629 +/- ##
===========================================
+ Coverage 60.72% 60.95% +0.22% 
===========================================
 Files 265 266 +1 
 Lines 15841 15917 +76 
 Branches 2042 2053 +11 
===========================================
+ Hits 9620 9702 +82 
+ Misses 6095 6084 -11 
- Partials 126 131 +5 
Flag Coverage Δ
benchmarks 10.65% <0.00%> (-0.06%) :arrow_down:
integration 46.13% <66.30%> (+0.50%) :arrow_up:
linux 57.38% <53.98%> (+0.30%) :arrow_up:
macOS 24.20% <0.00%> (-0.10%) :arrow_down:
rpu_u 46.13% <66.30%> (+0.50%) :arrow_up:
unittests 21.57% <0.00%> (-0.11%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
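The headline percentages in the Codecov report above follow directly from Hits / Lines; truncation (rather than rounding) to two decimals is an assumption inferred from the displayed values, not from Codecov documentation:

```cpp
#include <cmath>

// Sketch: Codecov-style coverage percentage, assuming truncation to two
// decimal places (60.72849...% is displayed as 60.72%, not 60.73%).
double coveragePercent(int hits, int lines) {
  const double pct = 100.0 * static_cast<double>(hits) / lines;
  return std::floor(pct * 100.0) / 100.0; // truncate to two decimals
}
```

Fed with the table's numbers, coveragePercent(9620, 15841) gives 60.72 (the base) and coveragePercent(9702, 15917) gives 60.95 (the head), matching the comparison line above.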

This is roughly it, I guess; now we "just" need to benchmark things...

Merge pull request #628 from LebedevRI/ci

Bump all(?) github actions to their latest versions

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (29aff2e) 60.72% compared to head (15fdb29) 60.72%.

@@ Coverage Diff @@
## develop #628 +/- ##
========================================
 Coverage 60.72% 60.72% 
========================================
 Files 265 265 
 Lines 15841 15841 
 Branches 2042 2042 
========================================
 Hits 9620 9620 
 Misses 6095 6095 
 Partials 126 126 
Flag Coverage Δ
benchmarks 10.70% <ø> (ø)
integration 45.62% <ø> (ø)
linux 57.07% <ø> (ø)
macOS 24.29% <ø> (ø)
rpu_u 45.62% <ø> (ø)
unittests 21.68% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #627 from LebedevRI/dng-jpeg

(DNG/Sony) LJpeg decompressor: make it -10% faster. WOW?

AbstractLJpegDecoder: decodeScan() must return scan byte length

Relying on each implementation to remember to advance the input's position accordingly clearly doesn't work; let's just enforce it globally.
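The contract change this commit describes can be sketched as follows: decodeScan() reports how many input bytes it consumed, and the base class performs the position advance in exactly one place. Apart from the decodeScan() name, the classes and members below are simplified stand-ins, not the actual rawspeed types:

```cpp
#include <cstddef>

// Base class owns the input position; implementations only report length.
class AbstractLJpegDecoderSketch {
public:
  virtual ~AbstractLJpegDecoderSketch() = default;

  // Decodes one scan and returns the number of input bytes it consumed.
  virtual std::size_t decodeScan() = 0;

  // The advance is enforced here, globally, for every implementation.
  void decode() { pos += decodeScan(); }

  std::size_t pos = 0; // current input position, in bytes
};

// A toy implementation whose scans are always 128 bytes long.
class FixedSizeScanDecoder final : public AbstractLJpegDecoderSketch {
public:
  std::size_t decodeScan() override { return 128; }
};
```

With this shape, an implementation that "forgets" to advance the position is impossible by construction: it can only get the reported length wrong, which is a much easier bug to catch.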

(DNG/Sony) LJpeg decompressor: make it -10% faster. WOW?

Comparing /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench-old to /home/lebedevri/rawspeed/build-Clang17-release/src/utilities/rsbench/rsbench
Benchmark Time CPU Time Old Time New CPU Old CPU New
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:32/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:32/process_time/real_time_mean -0.0615 -0.0619 12 11 377 354
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:32/process_time/real_time_median -0.0498 -0.0507 12 11 373 354
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:32/process_time/real_time_stddev -0.2116 -0.2142 0 0 10 8
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9394-compressed-lossless.DNG/threads:32/process_time/real_time_cv -0.1600 -0.1623 0 0 0 0
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:32/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:32/process_time/real_time_mean -0.1044 -0.1049 16 15 517 463
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:32/process_time/real_time_median -0.1108 -0.1110 16 14 515 458
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:32/process_time/real_time_stddev -0.0016 -0.0047 0 0 12 12
./Adobe DNG Converter/Canon EOS 5D Mark III/5G4A9395-compressed-lossless.DNG/threads:32/process_time/real_time_cv +0.1147 +0.1119 0 0 0 0
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:32/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:32/process_time/real_time_mean -0.2185 -0.2185 5 4 163 128
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:32/process_time/real_time_median -0.2150 -0.2147 5 4 161 126
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:32/process_time/real_time_stddev -0.4047 -0.4066 0 0 5 3
./Fujifilm/X100S/fujifilm-x100s-daylight-DSCF9505.dng/threads:32/process_time/real_time_cv -0.2382 -0.2407 0 0 0 0
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:32/process_time/real_time_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:32/process_time/real_time_mean -0.0791 -0.0801 31 29 1000 920
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:32/process_time/real_time_median -0.0877 -0.0884 31 29 995 907
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:32/process_time/real_time_stddev -0.0990 -0.1090 1 1 23 21
./Adobe DNG Converter/Canon EOS 5D Mark IV/B13A0729.dng/threads:32/process_time/real_time_cv -0.0215 -0.0314 0 0 0 0
Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:32/process_time/real_time_pvalue 0.4781 0.4363 U Test, Repetitions: 27 vs 27
Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:32/process_time/real_time_mean -0.0048 -0.0048 29 29 930 925
Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:32/process_time/real_time_median +0.0029 +0.0059 29 29 922 927
Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:32/process_time/real_time_stddev -0.0006 -0.0040 1 1 25 25
Sony/ILCE-7RM5/7RM5-S35-LosslessCompressedMedium.ARW/threads:32/process_time/real_time_cv +0.0042 +0.0009 0 0 0 0
OVERALL_GEOMEAN -0.0965 -0.0968 0 0 0 0
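The OVERALL_GEOMEAN row above is (to rounding) the geometric mean of the per-benchmark relative changes, i.e. geomean(1 + r_i) - 1. A minimal sketch of that computation, taking the real_time_mean values from the table as input:

```cpp
#include <cmath>
#include <vector>

// Geometric mean of relative changes: exp(mean(log(1 + r_i))) - 1.
double overallGeomeanChange(const std::vector<double>& relativeChanges) {
  double logSum = 0.0;
  for (double r : relativeChanges)
    logSum += std::log1p(r); // log(1 + r_i)
  return std::expm1(logSum / static_cast<double>(relativeChanges.size()));
}
```

Feeding the five real_time_mean values (-0.0615, -0.1044, -0.2185, -0.0791, -0.0048) yields about -0.0965, matching the OVERALL_GEOMEAN row and the commit's "-10% faster" headline.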


Codecov Report

Attention: 7 lines in your changes are missing coverage. Please review.

Comparison is base (cb43e8e) 60.72% compared to head (9f8d1b1) 60.72%.

Files Patch % Lines
...rc/librawspeed/decompressors/LJpegDecompressor.cpp 70.00% 3 Missing :warning:
...zz/librawspeed/decompressors/DummyLJpegDecoder.cpp 0.00% 2 Missing :warning:
...zz/librawspeed/decompressors/LJpegDecompressor.cpp 0.00% 1 Missing :warning:
...librawspeed/decompressors/AbstractLJpegDecoder.cpp 66.66% 1 Missing :warning:
@@ Coverage Diff @@
## develop #627 +/- ##
========================================
 Coverage 60.72% 60.72% 
========================================
 Files 265 265 
 Lines 15845 15841 -4 
 Branches 2042 2044 +2 
========================================
- Hits 9622 9620 -2 
+ Misses 6097 6095 -2 
 Partials 126 126 
Flag Coverage Δ
benchmarks 10.70% <0.00%> (+<0.01%) :arrow_up:
integration 45.62% <71.42%> (-0.01%) :arrow_down:
linux 57.07% <68.18%> (-0.01%) :arrow_down:
macOS 24.29% <0.00%> (+<0.01%) :arrow_up:
rpu_u 45.62% <71.42%> (-0.01%) :arrow_down:
unittests 21.68% <0.00%> (+<0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #626 from LebedevRI/cr2

Cr2Decompressor: make whole-image decoding -10% faster. Wow?

Cr2Decompressor: make whole-image decoding -10% faster. Wow?

Comparing src/utilities/rsbench/rsbench-old to src/utilities/rsbench/rsbench
Benchmark Time CPU Time Old Time New CPU Old CPU New
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
raw.pixls.us-unique/Canon/EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_mean -0.0796 -0.0796 27 25 27 25
raw.pixls.us-unique/Canon/EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_median -0.0796 -0.0796 27 25 27 25
raw.pixls.us-unique/Canon/EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_stddev +0.4248 +0.4262 0 0 0 0
raw.pixls.us-unique/Canon/EOS 40D/_MG_0154.CR2/threads:32/process_time/real_time_cv +0.5481 +0.5496 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_mean -0.1362 -0.1359 105 91 3354 2898
raw.pixls.us-unique/Canon/EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_median -0.1392 -0.1386 105 91 3354 2890
raw.pixls.us-unique/Canon/EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_stddev +0.7775 +0.4946 1 1 19 29
raw.pixls.us-unique/Canon/EOS 5D Mark II/09.canon.sraw1.cr2/threads:32/process_time/real_time_cv +1.0577 +0.7297 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_mean -0.0665 -0.0665 55 52 55 52
raw.pixls.us-unique/Canon/EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_median -0.0665 -0.0665 55 52 55 52
raw.pixls.us-unique/Canon/EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_stddev -0.9041 -0.9014 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5D Mark II/10.canon.sraw2.cr2/threads:32/process_time/real_time_cv -0.8972 -0.8943 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_mean -0.1502 -0.1502 225 191 225 191
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_median -0.1496 -0.1496 225 192 225 192
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_stddev +3.8736 +3.9278 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_cv +4.7349 +4.7987 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.6588 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_mean -0.0764 +0.0020 267 247 6984 6998
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_median -0.0756 -0.0010 267 247 6992 6984
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_stddev +0.8443 +0.0099 1 1 38 38
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_cv +0.9967 +0.0080 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_mean -0.0689 -0.0689 154 144 154 144
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_median -0.0682 -0.0682 154 144 154 144
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_stddev +6.6731 +6.8931 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_cv +7.2405 +7.4768 0 0 0 0
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_mean -0.1334 -0.1334 151 131 151 131
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_median -0.1333 -0.1333 151 131 151 131
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_stddev +0.1553 +0.1940 0 0 0 0
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_cv +0.3331 +0.3777 0 0 0 0
OVERALL_GEOMEAN -0.1022 -0.0917 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_mean -0.1502 -0.1502 225 191 225 191
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_median -0.1496 -0.1496 225 192 225 192
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_stddev +3.8736 +3.9278 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9927.CR2/threads:32/process_time/real_time_cv +4.7349 +4.7987 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.6588 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_mean -0.0764 +0.0020 267 247 6984 6998
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_median -0.0756 -0.0010 267 247 6992 6984
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_stddev +0.8443 +0.0099 1 1 38 38
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9928.CR2/threads:32/process_time/real_time_cv +0.9967 +0.0080 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_mean -0.0689 -0.0689 154 144 154 144
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_median -0.0682 -0.0682 154 144 154 144
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_stddev +6.6731 +6.8931 0 0 0 0
raw.pixls.us-unique/Canon/EOS 5DS/2K4A9929.CR2/threads:32/process_time/real_time_cv +7.2405 +7.4768 0 0 0 0
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_mean -0.1334 -0.1334 151 131 151 131
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_median -0.1333 -0.1333 151 131 151 131
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_stddev +0.1553 +0.1940 0 0 0 0
raw.pixls.us-unique/Canon/EOS 77D/IMG_4049.CR2/threads:32/process_time/real_time_cv +0.3331 +0.3777 0 0 0 0
OVERALL_GEOMEAN -0.1022 -0.0917 0 0 0 0

Merge pull request #625 from LebedevRI/jpeg

BitStreamerJPEG: better JPEG Marker handling

BitStreamerJPEG::fillCache(): don't mark JPEG Marker bytes (and further) as consumed

The external code may want to pick up where we've stopped, starting with the marker; if we move the position past it, that becomes harder to do, and would e.g. require re-scanning our whole input buffer in search of the marker.

This somewhat breaks truncated-input detection, since we'll always keep claiming that we've filled the cache (with zeros); not sure what to do about that.

It's possible we may want to track the actual (not effective) fill level in with-asserts builds, but as usual, truncated input is an exceptional situation, and we don't need to go out of our way to detect it, especially at the cost of good-path performance.
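A minimal sketch of the refill policy described above, assuming a plain byte buffer; fillFromStuffedStream and its signature are made up for illustration and are not rawspeed's actual fillCache():

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative sketch only (not rawspeed's real API): refill up to 4 bytes
// of the bit cache from a JPEG byte-stuffed stream. On hitting a real
// marker (0xFF not followed by a stuffing 0x00), the marker byte is NOT
// marked as consumed; the remaining cache bytes are filled with zeros, so
// external code can resume exactly at the marker without re-scanning.
// Returns the number of input bytes actually consumed.
size_t fillFromStuffedStream(const uint8_t* input, size_t avail,
                             uint8_t out[4]) {
  size_t pos = 0;
  for (int i = 0; i < 4; ++i) {
    out[i] = 0; // default: zero-fill (marker reached or input exhausted)
    if (pos >= avail)
      continue;
    const uint8_t b = input[pos];
    if (b == 0xFF && (pos + 1 >= avail || input[pos + 1] != 0x00))
      continue; // JPEG marker: leave it (and everything after) unconsumed
    out[i] = b;
    pos += (b == 0xFF) ? 2 : 1; // skip the 0x00 stuffing after a data 0xFF
  }
  return pos;
}
```

Once a marker is reached, the sketch keeps reporting a full, zero-filled cache, which is exactly the truncated-input-detection trade-off mentioned above.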

This does not seem to be slower per se (<=0.5%):

build-Clang17-release$ /usr/src/googlebenchmark/tools/compare.py -a benchmarks bench/librawspeed/io/BitStreamerJPEGBenchmark{-old,} --benchmark_min_warmup_time=0.5 --benchmark_repetitions=27
RUNNING: bench/librawspeed/io/BitStreamerJPEGBenchmark-old --benchmark_min_warmup_time=0.5 --benchmark_repetitions=27 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpugrhe43a
2024-01-26T21:38:22+03:00
Running bench/librawspeed/io/BitStreamerJPEGBenchmark-old
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 0.64, 0.99, 0.96
-------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-------------------------------------------------------------------------------------------------------
BM_BitStreamerJPEG/Stuffed/16777216_mean 8165 us 8165 us 27 Latency=486.665ps Throughput=1.91368Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_median 8165 us 8165 us 27 Latency=486.656ps Throughput=1.91372Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_stddev 2.54 us 2.53 us 27 Latency=150.916fs Throughput=622.203Ki/s
BM_BitStreamerJPEG/Stuffed/16777216_cv 0.03 % 0.03 % 27 Latency=0.03% Throughput=0.03%
BM_BitStreamerJPEG/Unstuffed/16777216_mean 7415 us 7415 us 27 Latency=441.955ps Throughput=2.10728Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_median 7415 us 7415 us 27 Latency=441.953ps Throughput=2.10729Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_stddev 0.608 us 0.587 us 27 Latency=34.9777fs Throughput=174.865Ki/s
BM_BitStreamerJPEG/Unstuffed/16777216_cv 0.01 % 0.01 % 27 Latency=0.01% Throughput=0.01%
RUNNING: bench/librawspeed/io/BitStreamerJPEGBenchmark --benchmark_min_warmup_time=0.5 --benchmark_repetitions=27 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpri1qej4p
2024-01-26T21:39:47+03:00
Running bench/librawspeed/io/BitStreamerJPEGBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 0.97, 1.02, 0.98
-------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-------------------------------------------------------------------------------------------------------
BM_BitStreamerJPEG/Stuffed/16777216_mean 8216 us 8215 us 27 Latency=489.681ps Throughput=1.9019Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_median 8210 us 8209 us 27 Latency=489.309ps Throughput=1.90334Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_stddev 13.1 us 13.1 us 27 Latency=780.748fs Throughput=3.10062Mi/s
BM_BitStreamerJPEG/Stuffed/16777216_cv 0.16 % 0.16 % 27 Latency=0.16% Throughput=0.16%
BM_BitStreamerJPEG/Unstuffed/16777216_mean 7417 us 7417 us 27 Latency=442.082ps Throughput=2.10668Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_median 7417 us 7417 us 27 Latency=442.085ps Throughput=2.10666Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_stddev 0.315 us 0.289 us 27 Latency=17.2337fs Throughput=86.1152Ki/s
BM_BitStreamerJPEG/Unstuffed/16777216_cv 0.00 % 0.00 % 27 Latency=0.00% Throughput=0.00%
Comparing bench/librawspeed/io/BitStreamerJPEGBenchmark-old to bench/librawspeed/io/BitStreamerJPEGBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
-------------------------------------------------------------------------------------------------------------------------------------------
BM_BitStreamerJPEG/Stuffed/16777216_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
BM_BitStreamerJPEG/Stuffed/16777216_mean +0.0062 +0.0062 8165 8216 8165 8215
BM_BitStreamerJPEG/Stuffed/16777216_median +0.0055 +0.0055 8165 8210 8165 8209
BM_BitStreamerJPEG/Stuffed/16777216_stddev +4.1559 +4.1734 3 13 3 13
BM_BitStreamerJPEG/Stuffed/16777216_cv +4.1241 +4.1415 0 0 0 0
BM_BitStreamerJPEG/Unstuffed/16777216_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
BM_BitStreamerJPEG/Unstuffed/16777216_mean +0.0003 +0.0003 7415 7417 7415 7417
BM_BitStreamerJPEG/Unstuffed/16777216_median +0.0003 +0.0003 7415 7417 7415 7417
BM_BitStreamerJPEG/Unstuffed/16777216_stddev -0.4821 -0.5073 1 0 1 0
BM_BitStreamerJPEG/Unstuffed/16777216_cv -0.4822 -0.5074 0 0 0 0
OVERALL_GEOMEAN +0.0032 +0.0032 0 0 0 0

This reverts commit 4c7496309ec6bd4d113ad960de0b94e4332a5d7c.

Will come in handy for restart intervals later on.

Codecov Report

Attention: 17 lines in your changes are missing coverage. Please review.

Comparison is base (a8e30a3) 60.58% compared to head (f47140a) 60.57%.

Files Patch % Lines
src/librawspeed/io/BitStreamerJPEG.h 0.00% 17 Missing :warning:
@@ Coverage Diff @@
## develop #625 +/- ##
===========================================
- Coverage 60.58% 60.57% -0.02% 
===========================================
 Files 265 265 
 Lines 15844 15847 +3 
 Branches 2042 2042 
===========================================
 Hits 9599 9599 
- Misses 6122 6125 +3 
 Partials 123 123 
Flag Coverage Δ
benchmarks 10.58% <0.00%> (-0.01%) :arrow_down:
integration 45.55% <ø> (+<0.01%) :arrow_up:
linux 57.00% <0.00%> (+<0.01%) :arrow_up:
macOS 24.28% <0.00%> (-0.01%) :arrow_down:
rpu_u 45.55% <ø> (+<0.01%) :arrow_up:
unittests 21.53% <0.00%> (-0.05%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

BitStreamerJPEG::fillCache(): only increment p when needed

We may need to not advance past the marker later on; making the advancing explicit makes that easier.

BitStreamerJPEG::fillCache(): improve short-circuiting of good case (-1%)

std::none_of() & friends early-return, which results in a horrible CFG in our case. std::accumulate() looks much nicer.
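The control-flow difference can be sketched as follows; anySpecialByte is a hypothetical stand-in for illustration, not the actual rawspeed code:

```cpp
#include <array>
#include <cstdint>
#include <numeric>

// Illustrative sketch: decide whether any of the 4 freshly loaded bytes
// needs special stuffing/marker handling. std::none_of() may early-return
// on the first hit, which puts extra branches on the hot path; an
// OR-reduction via std::accumulate() unconditionally visits all four
// bytes and tends to compile to straight-line code.
inline bool anySpecialByte(const std::array<uint8_t, 4>& bytes) {
  // Note the non-short-circuiting '|' in the reduction step.
  return std::accumulate(
      bytes.begin(), bytes.end(), false,
      [](bool acc, uint8_t b) -> bool { return acc | (b == 0xFF); });
}
```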

build-Clang17-release$ /usr/src/googlebenchmark/tools/compare.py -a benchmarks bench/librawspeed/io/BitStreamerJPEGBenchmark{-old,} --benchmark_repetitions=27 --benchmark_min_warmup_time=0.5
RUNNING: bench/librawspeed/io/BitStreamerJPEGBenchmark-old --benchmark_repetitions=27 --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpf9vjrabb
2024-01-26T21:11:36+03:00
Running bench/librawspeed/io/BitStreamerJPEGBenchmark-old
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 0.62, 1.01, 1.02
-------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-------------------------------------------------------------------------------------------------------
BM_BitStreamerJPEG/Stuffed/16777216_mean 8260 us 8260 us 27 Latency=492.317ps Throughput=1.89171Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_median 8260 us 8260 us 27 Latency=492.308ps Throughput=1.89175Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_stddev 2.79 us 2.78 us 27 Latency=165.976fs Throughput=668.816Ki/s
BM_BitStreamerJPEG/Stuffed/16777216_cv 0.03 % 0.03 % 27 Latency=0.03% Throughput=0.03%
BM_BitStreamerJPEG/Unstuffed/16777216_mean 7448 us 7447 us 27 Latency=443.897ps Throughput=2.09806Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_median 7448 us 7447 us 27 Latency=443.896ps Throughput=2.09807Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_stddev 0.272 us 0.238 us 27 Latency=14.1998fs Throughput=70.3722Ki/s
BM_BitStreamerJPEG/Unstuffed/16777216_cv 0.00 % 0.00 % 27 Latency=0.00% Throughput=0.00%
RUNNING: bench/librawspeed/io/BitStreamerJPEGBenchmark --benchmark_repetitions=27 --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmp8ulpn1sa
2024-01-26T21:13:01+03:00
Running bench/librawspeed/io/BitStreamerJPEGBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 1.16, 1.10, 1.05
-------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-------------------------------------------------------------------------------------------------------
BM_BitStreamerJPEG/Stuffed/16777216_mean 8165 us 8165 us 27 Latency=486.669ps Throughput=1.91367Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_median 8166 us 8165 us 27 Latency=486.694ps Throughput=1.91357Gi/s
BM_BitStreamerJPEG/Stuffed/16777216_stddev 3.19 us 3.18 us 27 Latency=189.818fs Throughput=782.75Ki/s
BM_BitStreamerJPEG/Stuffed/16777216_cv 0.04 % 0.04 % 27 Latency=0.04% Throughput=0.04%
BM_BitStreamerJPEG/Unstuffed/16777216_mean 7415 us 7414 us 27 Latency=441.931ps Throughput=2.10739Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_median 7414 us 7414 us 27 Latency=441.927ps Throughput=2.10741Gi/s
BM_BitStreamerJPEG/Unstuffed/16777216_stddev 0.272 us 0.268 us 27 Latency=15.9963fs Throughput=79.9773Ki/s
BM_BitStreamerJPEG/Unstuffed/16777216_cv 0.00 % 0.00 % 27 Latency=0.00% Throughput=0.00%
Comparing bench/librawspeed/io/BitStreamerJPEGBenchmark-old to bench/librawspeed/io/BitStreamerJPEGBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
-------------------------------------------------------------------------------------------------------------------------------------------
BM_BitStreamerJPEG/Stuffed/16777216_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
BM_BitStreamerJPEG/Stuffed/16777216_mean -0.0115 -0.0115 8260 8165 8260 8165
BM_BitStreamerJPEG/Stuffed/16777216_median -0.0114 -0.0114 8260 8166 8260 8165
BM_BitStreamerJPEG/Stuffed/16777216_stddev +0.1457 +0.1436 3 3 3 3
BM_BitStreamerJPEG/Stuffed/16777216_cv +0.1590 +0.1569 0 0 0 0
BM_BitStreamerJPEG/Unstuffed/16777216_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
BM_BitStreamerJPEG/Unstuffed/16777216_mean -0.0044 -0.0044 7448 7415 7447 7414
BM_BitStreamerJPEG/Unstuffed/16777216_median -0.0044 -0.0044 7448 7414 7447 7414
BM_BitStreamerJPEG/Unstuffed/16777216_stddev +0.0029 +0.1265 0 0 0 0
BM_BitStreamerJPEG/Unstuffed/16777216_cv +0.0074 +0.1315 0 0 0 0
OVERALL_GEOMEAN -0.0080 -0.0080 0 0 0 0

Merge remote-tracking branch 'upstream/pr/624' into develop

  • upstream/pr/624: Move Hasselblad CFV-50c as alias

Following up from https://github.com/darktable-org/rawspeed/pull/622

I haven't added the vanilla 3FR models, as they're not supported yet anyway.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (738fc19) 60.58% compared to head (bb39607) 60.58%.

@@ Coverage Diff @@
## develop #624 +/- ##
========================================
 Coverage 60.58% 60.58% 
========================================
 Files 265 265 
 Lines 15844 15844 
 Branches 2042 2042 
========================================
 Hits 9599 9599 
 Misses 6122 6122 
 Partials 123 123 
Flag Coverage Δ
benchmarks 10.59% <ø> (ø)
integration 45.55% <ø> (ø)
linux 57.00% <ø> (ø)
macOS 24.29% <ø> (ø)
rpu_u 45.55% <ø> (ø)
unittests 21.57% <ø> (ø)
windows ∅ <ø> (∅)


raw.pixls.us$ cat rstest.log 
--- "./Hasselblad/Hasselblad CFV-50c/B0002648.fff.hash" 2024-01-26 19:26:10.408177046 +0300
+++ "./Hasselblad/Hasselblad CFV-50c/B0002648.fff.hash.failed" 2024-01-26 19:26:53.464840501 +0300
@@ -5 +5 @@
-canonical_model: CFV-50c
+canonical_model: H5D-50c

Aha, so it doesn't affect the actual make/model reported. That's good to know, I guess.

@kmilos thank you.

Merge pull request #623 from LebedevRI/next

Extract "adt/Bit.h" out of "common/Common.h"

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (2d7814b) 63.93% compared to head (a3d5b08) 63.93%.

@@ Coverage Diff @@
## main #2908 +/- ##
=======================================
 Coverage 63.93% 63.93% 
=======================================
 Files 104 104 
 Lines 22400 22400 
 Branches 10877 10877 
=======================================
 Hits 14322 14322 
 Misses 5854 5854 
 Partials 2224 2224 


Failure is unrelated.

I can tell, but I've never seen it before... Any ideas?

Hasselblad H5D-50c: tighten crop

It's supposed to be identical to the one on the Hasselblad CFV-50c, and at least on the left side it was clearly undercropped.

Merge remote-tracking branch 'upstream/pr/622' into develop

  • upstream/pr/622: Added support for camera back "Hasselblad CFV-50c" (100% equivalent to the H5D-50c)

Added camera back "Hasselblad CFV-50c" which is completely equivalent to the "Hasselblad H5D-50c" (also tested by changing camera model on exif file from H5D-50c to CFV-50c).

Is this the same camera as Hasselblad CFV-50c/200, for which the two samples were just uploaded? If so, the names don't match: RawSpeed:Unable to find camera in database: 'Hasselblad' 'Hasselblad CFV-50c/200' ''

Is this the same camera as Hasselblad CFV-50c/200, for which the two samples were just uploaded? If so, the names don't match: RawSpeed:Unable to find camera in database: 'Hasselblad' 'Hasselblad CFV-50c/200' ''

I don't think so. EXIF camera model on CFV-50c FFF raw files says "Hasselblad CFV-50c" and not "Hasselblad CFV-50c/200". Such raw files don't load on Darktable. If I change the EXIF camera model with an exif editor to "Hasselblad H5D-50c" the raw files is loaded correctly on Darktable.

Honestly, I don't think a camera model named "Hasselblad CFV-50c/200" even exist(ed) on Hasselblad portfolio.

I did not load any example up until now. If needed, where do I have to upload sample RAW files?

I see. Then please contribute the full sample set to https://raw.pixls.us/ As far as I can tell from the camera's PDF manual, there is only a single RAW format, so a single sample is enough. Please make it something like a bright daylight landscape, not a dim indoors shot.

Also, please be sure to upload the file manually copied straight from the card, without any modifications whatsoever by anything.

@mristuccia thank you!

Also, please be sure to upload the file manually copied straight from the card, without any modifications whatsoever by anything.

@mristuccia thank you!

Sure! Just uploaded. Hope this is what you need. Thank you for the support!

@mristuccia thank you for contributing the sample!

@mristuccia thank you!

@LebedevRI thank you for the support. Hope I can use Darktable with my CFV-50c raw files soon. 😊

Well, if you can build dt yourself, you just need to update the src/external/rawspeed to use develop branch, and locally apply https://github.com/darktable-org/darktable/pull/16185 so it builds.

I'd much rather we went w/ aliases for Hasselblads like we did in https://github.com/darktable-org/rawspeed/pull/621 etc. Only it's not clear whether this should be an alias for the H5D-50c or the X1D...

Also note that the model string usually changes depending on whether it is the .3FR raw from the card directly or the .FFF raw via Phocus software tethered transfer.

This PR and the uploaded sample only cover the .FFF option.

I'd much rather we went w/ aliases for Hasselblads like we did in #621 etc. Only it's not clear whether this should be an alias for the H5D-50c or the X1D...

Definitely not the X1D. X1D is more the equivalent of the CFV-50c Mk II (or vice versa). H5D-50c and CFV-50c came out from the same project. One is a general purpose digital back which can be attached to the old 500 series cameras. The H5D-50c is the official digital back equivalent for the newest autofocus H series models. Internally they should be the same AFAIK, but I would not alias one into the other as they are two different products in the Hasselblad Catalog.

Also note that the model string usually changes depending on whether it is the .3FR raw from the card directly or the .FFF raw via Phocus software tethered transfer.

This PR and the uploaded sample only cover the .FFF option.

3FR files cannot be edited in the standard Hasselblad Phocus RAW developer; they need to be imported first. The import procedure turns them into FFFs, which right after the import/tether are still considered untouched originals. So, by Hasselblad's design, 3FRs should not be used for RAW editing, but if you think this design rule should be ignored and 3FRs should be covered as well, please explain what I need to do and I'll try to submit a new PR for that.

I'd much rather we went w/ aliases for Hasselblads like we did in #621 etc. Only it's not clear whether this should be an alias for the H5D-50c or the X1D...

I mean, it's not like things are set in stone afterwards.

...

I guess, you need to take the https://raw.pixls.us/getfile.php/7199/nice/Hasselblad%20-%20Hasselblad%20CFV-50c%20-%2016bit%20(4:3).fff, import into whatever software, and then upload the resulting "imported" "raw" again to RPU, for a start.

Definitely not the X1D. X1D is more the equivalent of the CFV-50c Mk II (or vice versa).

CFV II 50C is based on the X1D II 50C AFAICT, not the original X1D. But all three seem to have the same sensor.

H5D-50c and CFV-50c came out from the same project. One is a general purpose digital back which can be attached to the old 500 series cameras. The H5D-50c is the official digital back equivalent for the newest autofocus H series models.

Thanks for the background!

Internally they should be the same AFAIK, but I would not alias one into the other as they are two different products in the Hasselblad Catalog.

Oh, you'd still see them as separate models in darktable, it's just that internal technical debt in maintaining rawspeed database, dt noise profiles etc. is reduced.

3FR files cannot be edited in the standard Hasselblad Phocus RAW developer; they need to be imported first. The import procedure turns them into FFFs, which right after the import/tether are still considered untouched originals. So, by Hasselblad's design, 3FRs should not be used for RAW editing, but if you think this design rule should be ignored and 3FRs should be covered as well, please explain what I need to do and I'll try to submit a new PR for that.

AFAICT this is more of a recommendation than a rule. Adobe Camera Raw certainly lists 3FR as supported, for example, and we seem to have no problem loading either 3FR or FFF for the supported models in rawspeed and dt.

For starters, it would be good to also upload the 3FR that was the source for the FFF you have already uploaded (or upload a new matching pair).

@LebedevRI I think the other Hasselblad - Hasselblad CFV-50c/200 - 16bit (4:3).3FR sample we have came from attaching the back to a 200 series body as opposed to the 500 one.

So, if your 3FR comes out as CFV-50c/500 (I am indeed getting Exif hits on a web search), these are the changes I propose:

diff --git a/data/cameras.xml b/data/cameras.xml
index 569bd69d..b557bb7d 100644
--- a/data/cameras.xml
+++ b/data/cameras.xml
@@ -17286,6 +17286,11 @@
[XML markup lost in extraction: this hunk adds "Hasselblad CFV-50c",
"Hasselblad CFV-50c/200" and "Hasselblad CFV-50c/500" as aliases, next to
the entry carrying the color matrix line "4932 -835 141"]
@@ -17294,24 +17299,6 @@
[XML markup lost in extraction: this hunk removes the "Hasselblad
50-15-Coated5" entry, including its RED/GREEN/GREEN/BLUE CFA layout and
the color matrix rows "4932 -835 141", "-4878 11868 3437" and
"-1138 1961 7067", leaving the following "Hasselblad 22-Uncoated" entry
untouched]

@LebedevRI Maybe add a "Hasselblad: Both 3FR and FFF" note on the RPU front page as well?

Btw, looks like that older Hasselblad - Hasselblad CFV-50c/200 - 16bit (4:3).3FR sample might be corrupted? The conversion to DNG works though...

Btw, looks like that older Hasselblad - Hasselblad CFV-50c/200 - 16bit (4:3).3FR sample might be corrupted? The conversion to DNG works though...

Yes, it doesn't decode as uncompressed either. I've deleted it.

@LebedevRI Maybe add a "Hasselblad: Both 3FR and FFF" note on RPU front page as well ?

https://github.com/pixlsus/raw/commit/858e8358c271728d460a80cac21763642ad9881e

Btw, looks like that older Hasselblad - Hasselblad CFV-50c/200 - 16bit (4:3).3FR sample might be corrupted? The conversion to DNG works though...

Yes, it doesn't decode as uncompressed either. I've deleted it.

Hm, I didn't have access to the uncompressed one... The ljpeg failure could perhaps also be related to https://github.com/darktable-org/rawspeed/issues/144?

I was getting this:

ERROR: [rawspeed] C:/msys64/home/kmilos/rawspeed/src/librawspeed/decoders/RawDecoder.cpp:337: rawspeed::RawImage rawspeed::RawDecoder::decodeRaw(): C:/msys64/home/kmilos/rawspeed/src/librawspeed/io/Buffer.h:80: rawspeed::Buffer rawspeed::Buffer::getSubView(size_type, size_type) const: Buffer overflow: image file may be truncated

and I can also reproduce it on H5D-50c 3FR samples from DPR (maybe one more hint this is indeed the same sensor+processor as in CFV-50c).

@kmilos there is an error in the alias: the id must be corrected as indicated below.

And, by the way, I was able to identify all aliases for the .3FR files generated by the CFV-50c:

Hasselblad CFV-50c <-- already there, must be corrected as indicated here
CFV-50c/Flash Sync
CFV-50c/SWC
CFV-50c/200
CFV-50c/500
CFV-50c/Schneider
CFV-50c/LensCtrl S
CFV-50c/Winder CW
CFV-50c/ELD
CFV-50c/ELX
CFV-50c/Pinhole

@kmilos if you want you can add all of them as well. This will guarantee the 3FR compatibility in addition to the FFF one.

I have this digital back and I've understood where the information after the "/" is coming from. Those are the different camera systems this back is compatible with and they can be chosen from its settings menu.

I remain sceptical, though, about the claim that editing a 3FR is not intended by Hasselblad by design.

Define "editing"? I guess the question is: what modifications are done to the decoded 3FR before it is re-saved as FFF? If there aren't any, then there is zero difference in what the user should actually process.

Define "editing"? I guess the question is: what modifications are done to the decoded 3FR before it is re-saved as FFF? If there aren't any, then there is zero difference in what the user should actually process.

Phocus, the official Hasselblad RAW editor, does not allow applying any development to the 3FR file. Only conversion to FFF is possible. So any action that causes the creation of a sidecar .xmp file for the 3FR should be avoided if we want to stick to the original vendor's intent.

does not allow applying any development to the 3FR file.

That does not, at all, mean that no changes are done to the data. They could, e.g., be forcibly applying flat-field correction, or some bad-pixel removal.

So any action that causes the creation of a sidecar .xmp file for the 3FR should be avoided if we want to stick to the original vendor's intent.

Well, that is rather, err, . The vendor's intent is for consumers to use the software provided by the vendor, so let's not think about vendor intent too much :)

Well, that is rather, err, . The vendor's intent is for consumers to use the software provided by the vendor, so let's not think about vendor intent too much :)

Anyway, I'm not able to open the 3FR files, despite all the aliases being there and despite having double-checked the model name directly in the EXIF data. So there is something in the 3FR files that goes against your desire not to think about the vendor's intent too much. 😅

Anyway, I'm not able to open the 3FR files, despite all the aliases being there and despite having double-checked the model name directly in the EXIF data.

Which 3FR in particular?

The inability to open 3FR files for these older models is not due to aliases, but entirely due to https://github.com/darktable-org/rawspeed/issues/144

3FR files from newer models like CFV II 50C and X1D 50C II open just fine.

When converting to FFF, this is (again losslessly) re-compressed into a more compliant stream, for one. We don't know if there is any other processing on top.

Which 3FR in particular?

Any 3FR coming from my CFV-50c (mark I).

The inability to open 3FR files for these older models is not due to aliases, but entirely due to #144

3FR files from newer models like CFV II 50C and X1D 50C II open just fine.

When converting to FFF, this is (again losslessly) re-compressed into a more compliant stream, for one. We don't know if there is any other processing on top.

The model name in the alias is correct, so most probably it is the #144.

For now I'll stick with the process of generating the FFF file by importing the 3FR in Phocus and then I'll develop the FFF file on Darkroom.

@mristuccia Could you please take one of the problematic(!!!) raws, convert it to FFF, and contribute BOTH of the files to RPU?

Bonus points if the sample is well-lit (low-iso daylight landscape) and is horizontal. (Extra bonus points if said raw is fresh, just from the camera, via a good card+cable)

@mristuccia Could you please take one of the problematic(!!!) raws, convert it to FFF, and contribute BOTH of the files to RPU?

Bonus points if the sample is well-lit (low-iso daylight landscape) and is horizontal. (Extra bonus points if said raw is fresh, just from the camera, via a good card+cable)

Done.

  • Original 3FR straight from the camera card:
20200215_BERLIN_OldNew_WestHafen_B0000420.3FR
  • Corresponding imported FFF file from Hasselblad Phocus software:
20200215_BERLIN_OldNew_WestHafen_B0000420.FFF
  • The 3FR file model name is "CFV-50c/Flash Sync" (that was my setting in the camera when I shot it).

  • The FFF file model name is "Hasselblad CFV-50c", as expected. (Please correct the alias in your merge request as indicated above, otherwise the FFF file won't open correctly in Darktable.)

Now, where are my bonus points? 😀

P.S. I've spotted a blank character in front of the model name in the exif data of the 3FR file. It is like " CFV-50c/Flash Sync" rather than "CFV-50c/Flash Sync".

But even when including such a leading blank character in the alias, it does not work.

Now, where are my bonus points? 😀

Alright, something like this might suffice, perhaps? :)

[image]

Added support for camera back "Hasselblad CFV-50c" (100% equivalent to the H5D-50c)

Added camera back "Hasselblad CFV-50c" which is completely equivalent to the "Hasselblad H5D-50c" (also tested by changing camera model on exif file from H5D-50c to CFV-50c).

Merge remote-tracking branch 'upstream/pr/530' into develop

  • upstream/pr/530: Fujifilm GFX100 II support

Merge pull request #2157 from BAGELGENESIS/master

Integrated Mamiya 35mm 22.0-3.5 and Schneider LS 80mm 2.8 into 6x6.xml

Merge remote-tracking branch 'upstream/pr/619' into develop

  • upstream/pr/619: Samsung NX3300 support

Added Schneider LS 80mm 2.8 and Mamiya 35mm f/22.0-3.5 to 6x6.xml

Merge pull request #620 from LebedevRI/bitstream-roundtrip

BitVacuumer: BitStream rountripping

BitStreamerJPEGBenchmark: JPEGUnstuffedByteStreamGenerator: init numBytesGenerated in ctor

Why is CI not complaining about this?

BitStreamerCacheLeftInRightOut::peek(): use extractLowBits()

We know we never want 0 bits, but we could want 32 bits, so extractLowBits() is more appropriate. Both result in bzhi instruction on x86 BMI2.

Merge pull request #621 from kmilos/kmilos/hassy_cfvii50c

Add Hasselblad CFV II 50C aliases

Add Hasselblad CFV II 50C aliases

Also tweak X1DM2-50c to match model naming scheme

Also tweak X1DM2-50c to match model naming scheme

Looks like you need to at least change to.

@kmilos thank you!

Hm, looks like dt doesn't like the / in model name, it hangs on loading that DPR .3FR sample...

Trying to figure out if it's due to rawspeed or dt thumbnail loading... @TurboGit

Never mind - it was the new "unknown-no-samples" stuff, can't mix that cameras.xml w/ dt master builds... 😊

If I just cherry pick these changes, it hangs no more.

This is an automatic backport of pull request #2897 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Attention: 14 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (1eeb4ff) 64.03%. Report is 30 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/value.cpp 86.36% 0 Missing and 3 partials :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2907 +/- ##
==========================================
+ Coverage 63.99% 64.03% +0.03% 
==========================================
 Files 103 104 +1 
 Lines 22338 22371 +33 
 Branches 10821 10834 +13 
==========================================
+ Hits 14296 14325 +29 
- Misses 5818 5820 +2 
- Partials 2224 2226 +2 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

remove more regex

Signed-off-by: Rosen Penev

This is an automatic backport of pull request #2904 done by Mergify.




Codecov Report

Attention: 11 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (3f987b9) 64.00%. Report is 28 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2906 +/- ##
=======================================
 Coverage 63.99% 64.00% 
=======================================
 Files 103 104 +1 
 Lines 22338 22360 +22 
 Branches 10821 10833 +12 
=======================================
+ Hits 14296 14311 +15 
- Misses 5818 5823 +5 
- Partials 2224 2226 +2 


This is an automatic backport of pull request #2903 done by Mergify.




Codecov Report

Attention: 11 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (b6abffd) 64.00%. Report is 28 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2905 +/- ##
=======================================
 Coverage 63.99% 64.00% 
=======================================
 Files 103 104 +1 
 Lines 22338 22360 +22 
 Branches 10821 10833 +12 
=======================================
+ Hits 14296 14311 +15 
- Misses 5818 5823 +5 
- Partials 2224 2226 +2 


Actions: darktable-org/rawspeed, workflow CI (CI.yml, 856 workflow runs)

  • CI #1054: BitVacuumer: BitStream rountripping (pull request #620, synchronize by LebedevRI, LebedevRI:bitstream-roundtrip), January 24, 2024 22:58, 51m 2s
  • CI #1053: BitVacuumer: BitStream rountripping (pull request #620, opened by LebedevRI, LebedevRI:bitstream-roundtrip), January 24, 2024 22:16, 42m 20s
  • CI #1052: Samsung NX3300 support (pull request #619, opened by kmilos, kmilos:kmilos/sammy_nx3300), January 23, 2024 12:55, 54m 11s
  • CI #1051: Merge remote-tracking branch 'upstream/pr/618' into develop (commit dbb3a17 pushed by LebedevRI, develop), January 23, 2024 12:37, 50m 40s
  • CI #1050: Clean up some Canon aliases (pull request #618, opened by kmilos, kmilos:kmilos/aliases), January 23, 2024 10:07, 50m 26s
  • CI #1049: Add some isolated unknown camera models (pull request #617, opened by kmilos, kmilos:kmilos/unknowns), January 23, 2024 09:29, 50m 27s
  • CI #1048: Merge pull request #616 from LebedevRI/bitstreamer (commit bfd86ac pushed by LebedevRI, develop), January 23, 2024 02:32, 52m 5s
  • CI #1047: Rename BitPump/BitStream to BitStreamer (pull request #616, synchronize by LebedevRI, LebedevRI:bitstreamer), January 23, 2024 02:17, 48m 53s
  • CI #1046: Rename BitPump/BitStream to BitStreamer (pull request #616, opened by LebedevRI, LebedevRI:bitstreamer), January 23, 2024 02:14, 2m 48s
  • CI #1045: Merge pull request #615 from LebedevRI/next (commit 4032aa1 pushed by LebedevRI, develop), January 23, 2024 01:04, 50m 41s
  • CI #1044: Untangle some of calls to Buffer::getAsArray1DRef() (pull request #615, synchronize by LebedevRI, LebedevRI:next), January 23, 2024 00:12, 54m 50s
  • CI #1043: Untangle some of calls to Buffer::getAsArray1DRef() (pull request #615, synchronize by LebedevRI, LebedevRI:next), January 22, 2024 23:53, 18m 41s
  • CI #1042: Untangle some of calls to Buffer::getAsArray1DRef() (pull request #615, synchronize by LebedevRI, LebedevRI:next), January 22, 2024 23:45, 8m 43s
  • CI #1041: Untangle some of calls to Buffer::getAsArray1DRef() (pull request #615, synchronize by LebedevRI, LebedevRI:next), January 22, 2024 23:37, 9m 5s
  • CI #1040: Untangle some of calls to Buffer::getAsArray1DRef() (pull request #615, opened by LebedevRI, LebedevRI:next), January 22, 2024 23:33, 4m 11s
  • CI #1039: Merge pull request #614 from LebedevRI/case-insensitive-fs (commit 600692b pushed by LebedevRI, develop), January 22, 2024 14:25, 51m 57s
  • CI #1038: Unbreak build on case-insensitive fs (pull request #614, opened by LebedevRI, LebedevRI:case-insensitive-fs), January 22, 2024 14:14, 45m 43s
  • CI #1037: Merge pull request #613 from LebedevRI/next (commit 0c19107 pushed by LebedevRI, develop), January 22, 2024 02:38, 51m 4s
  • CI #1036: BitStream: sizes are signed!!!11, sprinkle invariants (pull request #613, opened by LebedevRI, LebedevRI:next), January 22, 2024 02:14, 47m 44s
  • CI #1035: Merge pull request #612 from LebedevRI/next (commit 4ec395a pushed by LebedevRI, develop), January 21, 2024 23:13, 44m 16s
  • CI #1034: BitPumpJPEG::fillCache(): make it -7% faster (pull request #612, opened by LebedevRI, LebedevRI:next), January 21, 2024 22:53, 40m 59s
  • CI #1033: Merge pull request #611 from LebedevRI/next (commit 137ddf5 pushed by LebedevRI, develop), January 21, 2024 17:54, 44m 18s
  • CI #1032: Add benchmark for LJpeg byte stream unstuffing (pull request #611, synchronize by LebedevRI, LebedevRI:next), January 21, 2024 17:37, 40m 35s
  • CI #1031: Add benchmark for LJpeg byte stream unstuffing (pull request #611, synchronize by LebedevRI, LebedevRI:next), January 21, 2024 17:09, 28m 24s
  • CI #1030: Add benchmark for LJpeg byte stream unstuffing (pull request #611, synchronize by LebedevRI, LebedevRI:next), January 21, 2024 16:50, 19m 23s

Codecov Report

Attention: 100 lines in your changes are missing coverage. Please review.

Comparison is base (dbb3a17) 59.21% compared to head (b2682f6) 60.60%. Report is 2 commits behind head on develop.

Files Patch % Lines
fuzz/librawspeed/io/BitVacuumerRoundtrip.cpp 0.00% 69 Missing :warning:
src/librawspeed/io/BitVacuumer.h 54.16% 10 Missing and 1 partial :warning:
src/librawspeed/common/Common.h 38.46% 8 Missing :warning:
src/librawspeed/io/BitVacuumerJPEG.h 80.00% 1 Missing and 1 partial :warning:
src/librawspeed/io/BitVacuumerLSB.h 75.00% 1 Missing and 1 partial :warning:
src/librawspeed/io/BitVacuumerMSB.h 75.00% 1 Missing and 1 partial :warning:
src/librawspeed/io/BitVacuumerMSB16.h 80.00% 1 Missing and 1 partial :warning:
src/librawspeed/io/BitVacuumerMSB32.h 75.00% 1 Missing and 1 partial :warning:
src/librawspeed/io/ByteStream.h 0.00% 2 Missing :warning:
@@ Coverage Diff @@
## develop #620 +/- ##
===========================================
+ Coverage 59.21% 60.60% +1.39% 
===========================================
 Files 251 263 +12 
 Lines 15061 15844 +783 
 Branches 2013 2042 +29 
===========================================
+ Hits 8918 9602 +684 
- Misses 6026 6119 +93 
- Partials 117 123 +6 
Flag Coverage Δ
benchmarks 10.59% <1.15%> (-0.53%) :arrow_down:
integration 45.56% <2.24%> (-0.85%) :arrow_down:
linux 57.02% <60.86%> (+0.09%) :arrow_up:
macOS 24.28% <84.01%> (+3.22%) :arrow_up:
rpu_u 45.56% <2.24%> (-0.85%) :arrow_down:
unittests 21.57% <86.94%> (+3.38%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown.


Merge pull request #2156 from BAGELGENESIS/master

Different accuracy corrections for previous PR #2154

Update 6x6.xml to include display name for Mamiya 120mm 4.0 and Mamiya 150mm 3.5

Exiftool outputs the aperture as a range, which has been re-integrated. In my tests, darktable's lens correction module had trouble with that, so it was omitted the first time.

Update slr-schneider.xml to include display name for Schneider LS 80mm 2.8

Using exiftool tables and some samples from web search.


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (57d9d2f) 63.90% compared to head (6899886) 63.90%.

@@ Coverage Diff @@
## main #2904 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 


@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

Cf. https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/DNG_Spec_1_7_1_0.pdf @boardhead


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (57d9d2f) 63.90% compared to head (ce8e84d) 63.90%.

@@ Coverage Diff @@
## main #2903 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 


@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

Added capturing camera, aspect ratio and maker name to the following lenses:

  • Schneider LS 80mm 2.8
  • Mamiya 150mm 3.5
  • Mamiya 120mm 4.0

Addresses #2154.

Added IQ180 as capturing camera for Schneider LS 80mm 2.8 in slr-schneider.xml

Added IQ180 as capturing camera for lens data of Mamiya 120mm 4.0 and Mamiya 150mm 3.5 in 6x6.xml

Added aspect ratio to Schneider LS 80mm 2.8 in slr-schneider.xml

Added maker to lens name and aspect ratio for Mamiya 120mm 4.0 and Mamiya 150mm 3.5 6x6.xml

Merge pull request #2154 from BAGELGENESIS/master

Calibration data for various Phase One XF/Mamiya 645 lenses

I do not know how to integrate my calibration data for the Mamiya 35mm f/3.5, captured on the IQ180, without conflicting with – or alternatively overwriting – the existing entry. Any advice would be appreciated.

Thanks, BAGELGENESIS

Mamiya 35mm.txt

@BAGELGENESIS I don't think there will be a conflict. The model name for your lens should be 35mm f/22.0-3.5 so there won't be a conflict with the existing lens which is Mamiya 35mm f/3.5

Understood, I will integrate it in a PR.

Added IQ180 and IQ140 digital backs. Added the following lenses:

  • Mamiya 120mm f/4.0
  • Mamiya 150mm f/3.5
  • Schneider LS 80mm f/2.8

I think there is a mistake in the profiles. The aspect ratio is not specified; it should be specified when creating the profile. I don't know whether the missing information leads to incorrect corrections in the different applications. If no information is provided, lensfun assumes an aspect ratio of 3:2.
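For context, the ratio can be declared per lens in lensfun's database XML. A hedged sketch follows; the mount name, crop factor and the lens chosen here are illustrative, not real calibration data:

```xml
<lens>
    <maker>Mamiya</maker>
    <model>Mamiya 120mm f/4.0</model>
    <mount>Mamiya 645</mount>            <!-- illustrative mount name -->
    <cropfactor>0.62</cropfactor>        <!-- illustrative value -->
    <aspect-ratio>4:3</aspect-ratio>     <!-- omit this and lensfun assumes 3:2 -->
</lens>
```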

The model name should include "Mamiya" like the other Mamiya lenses. You can use e.g. Mamiya 20mm f/4.0 on an additional line.

It is sometimes helpful if the lens profile contains information about which camera was used e.g. ``

Good morning,

I have addressed the mentioned issues in a new PR #2156. Have you received the example data I uploaded yesterday?

Thanks, BAGELGENESIS

I found the pictures, thank you very much. I looked at your changes in the new PR. It is not allowed to use the `<model>` identifier twice. [This identifier](https://wilson.bronger.org/lensfun/el_lens.html) is for the automatic lens detection. Please use the identifier `<model lang="en">Mamiya ...</model>`. This identifier is then only responsible for the output on the screen and the lensfun coverage website.

The line with information about the camera used should be part of the calibration data obtained with it. The calibration data of a lens profile could have been created with different camera models.

Here is an example from the database:


<lens>
    <maker>Nikon</maker>
    <model>Nikon AF-S Nikkor 16-35mm f/4G ED VR</model>
    <model lang="en">Nikkor AF-S 16-35mm f/4G ED VR</model>
    <mount>Nikon F AF</mount>
    <cropfactor>1</cropfactor>
</lens>

Understood. I have updated my repo; the changes should come up in the PR momentarily. Furthermore, I have changed the model name to include the aperture range, as exiftool outputs it. The reason it was omitted the first time is that, in my tests, darktable had issues if the model name included the aperture as a range.

Input for this topic would be appreciated.

In my OS (Debian Bookworm) lensfun does not perform picture correction under Darktable if the aperture range is included in the model name. If I paste the original model name back without the aperture range, the correction is carried out. However, I have to select all the lenses manually. Seems that Exiv2 has problems with the Phase One exif data. Does the picture correction work on your OS with the aperture range in the model name?

I have the exact same issue, and the exact same fix for it, with exiv2 under pop_os. Support for the proprietary IIQ format is hit and miss in general. Exiv2 does not recognise any lens at all, while exiftool outputs the lens ID with the aperture range. I excluded it at first as a band-aid fix for this issue; however, that was treating the symptom of using a single piece of software rather than the cause, hence my change of heart.

Since yesterday, I have experimented with taking my lens correction verification photos into Capture One (in a VM), exporting them as DNGs and re-importing them into Darktable. When using DNGs, Darktable (and by extension exiv2) will actually recognise the lens automatically, with the range as exiftool outputs it and correct it. Thus, the bug where darktable will not correct a lens, with an aperture range, is the result of exiv2 being unable to recognise the lens in the IIQ format, even if manually selected.

With all of that being said, what should the course of action for lensfun be? Ensure universal compatibility and deliberately include “wrong” data, or focus on accuracy and let the downstream projects handle it?

Thank you for the hints! After I converted the images with the Adobe DNG converter, the automatic correction also works in my system. We should leave the model names as they are now. The problem lies with exiv2 and the IIQ file format. I would still like to change one thing. The profile for the Schneider lens should not be in the slr-schneider.xml. I think it should be in the 6x6.xml. It's a medium format lens, isn't it?

Yes, it is a medium format lens; I will make a new PR.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (dbb3a17) 59.21% compared to head (b0009cd) 59.21%.

@@ Coverage Diff @@
## develop #619 +/- ##
========================================
 Coverage 59.21% 59.21% 
========================================
 Files 251 251 
 Lines 15061 15061 
 Branches 2013 2013 
========================================
 Hits 8918 8918 
 Misses 6026 6026 
 Partials 117 117 
Flag Coverage Δ
benchmarks 11.11% <ø> (ø)
integration 46.40% <ø> (ø)
linux 56.93% <ø> (ø)
macOS 21.06% <ø> (ø)
rpu_u 46.40% <ø> (ø)
unittests 18.18% <ø> (ø)
windows ∅ <ø> (∅)



I suppose it does look right-ish.

Amazing: each time I start darktable, it becomes visibly worse than the last time.

@kmilos thank you!

Merge remote-tracking branch 'upstream/pr/618' into develop

  • upstream/pr/618: Clean up some Canon aliases

Merge remote-tracking branch 'upstream/pr/617' into develop

  • upstream/pr/617: Add some isolated unknown camera models

Is your feature request related to a problem?

exiv2 doesn't recognize the lens Yongnuo YN 35mm f/2. The lens is recognized by exiftool as Yongnuo YN 35mm f/2 and is included in the lensfun database under this name.

Describe the solution you would like

It would be wonderful if you could add recognition of this lens to exiv2.

Describe alternatives you have considered

As a work-around I created an exiv2.ini.
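One plausible shape for such a work-around (hedged sketch: exiv2 reads per-make sections from its user config file, ~/.exiv2 on Unix or exiv2.ini on Windows, mapping a maker-note lens ID to a display name; the ID 198 is taken from the exiv2 output quoted later in this thread):

```ini
; Map Canon maker-note lens ID 198 to the desired lens name
[canon]
198=Yongnuo YN 35mm f/2
```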

Desktop

  • OS: Debian bookworm
  • Exiv2 version and source: 0.27.6 and 1.00.0.9
  • Any software using exiv2 and source: darktable 4.6.0

Additional context

I attached a jpeg picture taken with a Canon EOS 5D Mark II. The lens is also available with a Nikon F mount.

Seems to be present in the Canon tables already: https://github.com/Exiv2/exiv2/blob/v0.27.6/src/canonmn_int.cpp#L1900

As there are several lenses w/ this ID, it's possible that dealiasing for Canon is/was broken. Please search the issues, there are several similar reports about this.

For Nikon, a separate sample is needed.

Also, I don't see the problem on this sample w/ exiv2 0.28.1:

Exif.CanonCs.LensType 198 Yongnuo YN 35mm f/2

Thanks for the info! Unfortunately, I can't provide a picture for Nikon mount.

FYI: https://github.com/Exiv2/exiv2/pull/2904

Add Yongnuo YN 35mm f/2, crop 1.0

Upload ba24cd

Follow the normal scheme including "EOS", cf. https://global.canon/en/c-museum/camera.html?s=dslr

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (bfd86ac) 59.21% compared to head (46df710) 59.21%.

@@ Coverage Diff @@
## develop #618 +/- ##
========================================
 Coverage 59.21% 59.21% 
========================================
 Files 251 251 
 Lines 15061 15061 
 Branches 2013 2013 
========================================
 Hits 8918 8918 
 Misses 6026 6026 
 Partials 117 117 
Flag Coverage Δ
benchmarks 11.11% <ø> (ø)
integration 46.40% <ø> (ø)
linux 56.93% <ø> (ø)
macOS 21.06% <ø> (ø)
rpu_u 46.40% <ø> (ø)
unittests 18.18% <ø> (ø)
windows ∅ <ø> (∅)



I have not verified that this is what ADC calls them, but this looks more consistent. Thank you!

Clean up some Canon aliases

Follow the normal scheme including "EOS", cf. https://global.canon/en/c-museum/camera.html?s=dslr

Merge pull request #616 from LebedevRI/bitstreamer

Rename BitPump/BitStream to BitStreamer

Rename BitPump/BitStream to BitStreamer

It is not at all obvious what Pump actually does: does it push or pull? Same with Stream. Streamer, OTOH, is fairly unambiguous.

Now it's obvious that the inverse would be called BitVacuumer.


Merge pull request #615 from LebedevRI/next

Untangle some of calls to Buffer::getAsArray1DRef()

Codecov Report

Attention: 69 lines in your changes are missing coverage. Please review.

Comparison is base (600692b) 59.27% compared to head (4b0b5e7) 59.23%.

Files Patch % Lines
fuzz/librawspeed/codes/PrefixCodeDecoder/Dual.cpp 0.00% 15 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Solo.cpp 0.00% 11 Missing :warning:
src/librawspeed/decoders/OrfDecoder.cpp 0.00% 11 Missing :warning:
src/librawspeed/decoders/NefDecoder.cpp 20.00% 8 Missing :warning:
fuzz/librawspeed/decompressors/CrwDecompressor.cpp 0.00% 5 Missing :warning:
...brawspeed/decompressors/HasselbladDecompressor.cpp 0.00% 3 Missing :warning:
src/librawspeed/decoders/CrwDecoder.cpp 75.00% 2 Missing and 1 partial :warning:
src/librawspeed/decompressors/CrwDecompressor.cpp 81.25% 3 Missing :warning:
fuzz/librawspeed/decompressors/Cr2Decompressor.cpp 0.00% 2 Missing :warning:
...zz/librawspeed/decompressors/NikonDecompressor.cpp 0.00% 2 Missing :warning:
... and 5 more
@@ Coverage Diff @@
## develop #615 +/- ##
===========================================
- Coverage 59.27% 59.23% -0.05% 
===========================================
 Files 252 252 
 Lines 15031 15068 +37 
 Branches 2009 2013 +4 
===========================================
+ Hits 8910 8925 +15 
- Misses 6005 6026 +21 
- Partials 116 117 +1 
Flag Coverage Δ
benchmarks 11.10% <1.49%> (-0.04%) :arrow_down:
integration 46.38% <50.00%> (-0.02%) :arrow_down:
linux 56.97% <50.78%> (-0.05%) :arrow_down:
macOS 21.09% <1.55%> (-0.06%) :arrow_down:
rpu_u 46.38% <50.00%> (-0.02%) :arrow_down:
unittests 18.22% <0.00%> (-0.05%) :arrow_down:
windows ∅ <ø> (∅)



Bump actions/cache from 3 to 4

Bumps actions/cache from 3 to 4.


updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot]

Bump mymindstorm/setup-emsdk from 13 to 14

Bumps mymindstorm/setup-emsdk from 13 to 14.


updated-dependencies:
- dependency-name: mymindstorm/setup-emsdk
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot]

Bumps actions/cache from 3 to 4.

Sourced from actions/cache's releases.

v4.0.0

What's Changed

Update action to node20 by @takost in actions/cache#1284

feat: save-always flag by @to-s in actions/cache#1242

New Contributors

@takost made their first contribution in actions/cache#1284

@to-s made their first contribution in actions/cache#1242

Full Changelog: https://github.com/actions/cache/compare/v3...v4.0.0

v3.3.3

What's Changed

Cache v3.3.3 by @robherley in actions/cache#1302

New Contributors

@robherley made their first contribution in actions/cache#1302

Full Changelog: https://github.com/actions/cache/compare/v3...v3.3.3

v3.3.2

What's Changed

Fixed readme with new segment timeout values by @kotewar in actions/cache#1133

Readme fixes by @kotewar in actions/cache#1134

Updated description of the lookup-only input for main action by @kotewar in actions/cache#1130

Change two new actions mention as quoted text by @bishal-pdMSFT in actions/cache#1131

Update Cross-OS Caching tips by @pdotl in actions/cache#1122

Bazel example (Take #2) by @vorburger in actions/cache#1132

Remove actions to add new PRs and issues to a project board by @jorendorff in actions/cache#1187

Consume latest toolkit and fix dangling promise bug by @chkimes in actions/cache#1217

Bump action version to 3.3.2 by @bethanyj28 in actions/cache#1236

New Contributors

@vorburger made their first contribution in actions/cache#1132

@jorendorff made their first contribution in actions/cache#1187

@chkimes made their first contribution in actions/cache#1217

@bethanyj28 made their first contribution in actions/cache#1236

Full Changelog: https://github.com/actions/cache/compare/v3...v3.3.2

v3.3.1

What's Changed

Reduced download segment size to 128 MB and timeout to 10 minutes by @kotewar in actions/cache#1129

Full Changelog: https://github.com/actions/cache/compare/v3...v3.3.1

v3.3.0

What's Changed

Bug: Permission is missing in cache delete example by @kotokaze in actions/cache#1123

... (truncated)

Sourced from actions/cache's changelog.

Releases

3.0.0

Updated minimum runner version support from node 12 -> node 16

3.0.1

Added support for caching from GHES 3.5.

Fixed download issue for files > 2GB during restore.

3.0.2

Added support for dynamic cache size cap on GHES.

3.0.3

Fixed avoiding empty cache save when no files are available for caching. (issue )

3.0.4

Fixed tar creation error while trying to create tar with path as ~/ home folder on ubuntu-latest. (issue )

3.0.5

Removed error handling by consuming actions/cache 3.0 toolkit; cache server error handling will now be done by the toolkit. (PR )

3.0.6

Fixed #809 - zstd -d: no such file or directory error

Fixed #833 - cache doesn't work with github workspace directory

3.0.7

Fixed #810 - download stuck issue. A new timeout is introduced in the download process to abort the download if it gets stuck and doesn't finish within an hour.

3.0.8

Fix zstd not working for windows on gnu tar in issues #888 and #891.

Allowing users to provide a custom timeout as input for aborting download of a cache segment using an environment variable SEGMENT_DOWNLOAD_TIMEOUT_MINS. Default is 60 minutes.

3.0.9

Enhanced the warning message for cache unavailability in case of GHES.

3.0.10

Fix a bug with sorting inputs.

Update definition for restore-keys in README.md

... (truncated)

13aacd8 Merge pull request #1242 from to-s/main

53b35c5 Merge branch 'main' into main

65b8989 Merge pull request #1284 from takost/update-to-node-20

d0be34d Fix dist

66cf064 Merge branch 'main' into update-to-node-20

1326563 Merge branch 'main' into main

e718767 Fix format

0122982 Apply workaround for earlyExit

3185ecf Update "only-" actions to node20

25618a0 Bump version

Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Review these changes using an interactive CodeSee Map

Legend

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (cac362c) 63.90% compared to head (3ab8523) 63.90%.

@@ Coverage Diff @@
## main #2901 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Bumps mymindstorm/setup-emsdk from 13 to 14.

Sourced from mymindstorm/setup-emsdk's releases.

Version 14

Breaking Changes

The default cache key naming scheme was changed from {Emscripten version}-{OS type}-{CPU architecture}-master to {Github workflow name}-{Emscripten version}-{OS type}-{CPU architecture}. If actions-cache-folder is defined, ensure that there are no conflicts with other caches to prevent issues.
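As a sketch of why the rename matters: under the old scheme, two workflows in the same repository that used the same Emscripten version on the same runner type produced identical cache keys and could clobber each other's cache. The field values below are illustrative, not taken from the action itself.

```python
# Illustrative cache-key construction (values are made up; the real action
# derives them from the runner and its inputs).
def old_key(emsdk_version, os_type, arch):
    return f"{emsdk_version}-{os_type}-{arch}-master"

def new_key(workflow, emsdk_version, os_type, arch):
    return f"{workflow}-{emsdk_version}-{os_type}-{arch}"

# Two different workflows, same toolchain and runner:
a = new_key("CI", "3.1.50", "Linux", "X64")
b = new_key("Release", "3.1.50", "Linux", "X64")
assert a != b  # the workflow name now keeps the caches apart
```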

Changelog

Add option to override cache key naming scheme (#20)

Add workflow name to cache key naming scheme (#20)

Updated dependencies to latest versions

6ab9eb1 v13 -> v14

bb630c3 Update all dependencies to latest versions

7488110 Add workflow ID to cache key and cache key override option (#40)

See full diff in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (cac362c) 63.90% compared to head (fc24754) 63.90%.

@@ Coverage Diff @@
## main #2900 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #614 from LebedevRI/case-insensitive-fs

Unbreak build on case-insensitive fs

For some reason, the `<>` include form searches the current directory first, just like the `""` form does. After c2a68bb4124265eaf3b3dbac910b9cf32213943c, `Cpuid.h` is in the compiler's current directory for `Cpuid.cpp`, which on a case-insensitive filesystem results in the wrong file being included (the global header is not included).

https://github.com/darktable-org/rawspeed/pull/610#issuecomment-1903750793

It's probably due to a `-I.` compiler flag used somewhere in the build options.

{
 "directory": "/home/lebedevri/rawspeed/build-Clang17-release",
 "command": "/usr/bin/clang++-17 -DNDEBUG -I/home/lebedevri/rawspeed/build-Clang17-release/src -I/home/lebedevri/rawspeed/src/librawspeed/common/.. -isystem /home/lebedevri/rawspeed/src/external -Wall -Wextra -Weverything -Wno-c++98-compat -Wno-c++98-compat-pedantic -Wno-c++20-extensions -Wno-padded -Wno-switch-enum -Wno-unused-parameter -Wno-sign-conversion -Wextra-semi -Wframe-larger-than=4096 -Wlarger-than=32768 -Werror -O3 -DNDEBUG -O3 -std=c++20 -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -march=native -g3 -ggdb3 -fopenmp=libomp -fopenmp-version=50 -o src/librawspeed/common/CMakeFiles/rawspeed_common.dir/CpuFeatures.cpp.o -c /home/lebedevri/rawspeed/src/librawspeed/common/CpuFeatures.cpp",
 "file": "/home/lebedevri/rawspeed/src/librawspeed/common/CpuFeatures.cpp",
 "output": "src/librawspeed/common/CMakeFiles/rawspeed_common.dir/CpuFeatures.cpp.o"
},

I don't see -I/home/lebedevri/rawspeed/src/librawspeed/common in there; it just searches [first] for cpuid.h in the same directory where the source file is located.

Confirming it's passing that build stage now on Windows, thanks. (Black level interface update is indeed next.)

Something close to

Author: Roman Lebedev 
Date: Mon Jan 22 16:13:09 2024 +0300

tmp

diff --git a/src/imageio/imageio_rawspeed.cc b/src/imageio/imageio_rawspeed.cc
index c17e877cfc..80729eb158 100644
--- a/src/imageio/imageio_rawspeed.cc
+++ b/src/imageio/imageio_rawspeed.cc
@@ -208,16 +208,14 @@ dt_imageio_retval_t dt_imageio_open_rawspeed(dt_image_t *img,
 img->raw_black_level = r->blackLevel;
 img->raw_white_point = r->whitePoint;

- if(!r->blackAreas.empty() || r->blackLevelSeparate[0] == -1
- || r->blackLevelSeparate[1] == -1
- || r->blackLevelSeparate[2] == -1
- || r->blackLevelSeparate[3] == -1)
+ if(!r->blackAreas.empty() || !r->blackLevelSeparate)
 {
 r->calculateBlackAreas();
 }

+ const auto bl = *(r->blackLevelSeparate->getAsArray1DRef());
 for(uint8_t i = 0; i < 4; i++)
- img->raw_black_level_separate[i] = r->blackLevelSeparate[i];
+ img->raw_black_level_separate[i] = bl(i);

if(r->blackLevel == -1)
 {
@@ -398,7 +396,7 @@ dt_imageio_retval_t dt_imageio_open_rawspeed(dt_image_t *img,
 r->metadata.model.c_str(),
 r->metadata.mode.c_str());

- if(cam && cam->supportStatus == Camera::SupportStatus::NoSamples)
+ if(cam && cam->supportStatus == Camera::SupportStatus::SupportedNoSamples)
 img->camera_missing_sample = TRUE;
 }
 catch(const std::exception &exc)
@@ -551,7 +549,7 @@ dt_imageio_retval_t dt_imageio_open_rawspeed_sraw(dt_image_t *img,
 r->metadata.model.c_str(),
 r->metadata.mode.c_str());

- if(cam && cam->supportStatus == Camera::SupportStatus::NoSamples)
+ if(cam && cam->supportStatus == Camera::SupportStatus::SupportedNoSamples)
 img->camera_missing_sample = TRUE;

return DT_IMAGEIO_OK;


Merge pull request #613 from LebedevRI/next

BitStream: sizes are signed!!!11, sprinkle invariants

Again, slight improvement in BitPumpJPEGBenchmark, but otherwise mostly performance-neutral.

build-Clang17-release$ /usr/src/googlebenchmark/tools/compare.py -a benchmarks bench/librawspeed/io/BitPumpJPEGBenchmark{-old,} --benchmark_repetitions=9 --benchmark_min_warmup_time=0.5RUNNING: bench/librawspeed/io/BitPumpJPEGBenchmark-old --benchmark_repetitions=9 --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpz8tka7pa
2024-01-22T05:11:06+03:00
Running bench/librawspeed/io/BitPumpJPEGBenchmark-old
Run on (32 X 3402.99 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 6.12, 5.70, 6.46
---------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
---------------------------------------------------------------------------------------------------
BM_BitPumpJPEG/Stuffed/16777216_mean 8844 us 8842 us 9 Latency=527.011ps Throughput=1.76727Gi/s
BM_BitPumpJPEG/Stuffed/16777216_median 8823 us 8821 us 9 Latency=525.793ps Throughput=1.77127Gi/s
BM_BitPumpJPEG/Stuffed/16777216_stddev 68.7 us 68.3 us 9 Latency=4.07066ps Throughput=13.7753Mi/s
BM_BitPumpJPEG/Stuffed/16777216_cv 0.78 % 0.77 % 9 Latency=0.77% Throughput=0.76%
BM_BitPumpJPEG/Unstuffed/16777216_mean 8044 us 8040 us 9 Latency=479.248ps Throughput=1.9433Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_median 8045 us 8040 us 9 Latency=479.237ps Throughput=1.94334Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_stddev 1.69 us 2.29 us 9 Latency=136.419fs Throughput=579.907Ki/s
BM_BitPumpJPEG/Unstuffed/16777216_cv 0.02 % 0.03 % 9 Latency=0.03% Throughput=0.03%
RUNNING: bench/librawspeed/io/BitPumpJPEGBenchmark --benchmark_repetitions=9 --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpmrvdb0x_
2024-01-22T05:11:40+03:00
Running bench/librawspeed/io/BitPumpJPEGBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 3.85, 5.18, 6.26
---------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
---------------------------------------------------------------------------------------------------
BM_BitPumpJPEG/Stuffed/16777216_mean 8314 us 8312 us 9 Latency=495.409ps Throughput=1.8801Gi/s
BM_BitPumpJPEG/Stuffed/16777216_median 8283 us 8281 us 9 Latency=493.574ps Throughput=1.8869Gi/s
BM_BitPumpJPEG/Stuffed/16777216_stddev 90.0 us 90.3 us 9 Latency=5.37952ps Throughput=20.3907Mi/s
BM_BitPumpJPEG/Stuffed/16777216_cv 1.08 % 1.09 % 9 Latency=1.09% Throughput=1.06%
BM_BitPumpJPEG/Unstuffed/16777216_mean 7430 us 7427 us 9 Latency=442.694ps Throughput=2.10376Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_median 7430 us 7425 us 9 Latency=442.588ps Throughput=2.10426Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_stddev 7.38 us 7.18 us 9 Latency=427.866fs Throughput=2.07939Mi/s
BM_BitPumpJPEG/Unstuffed/16777216_cv 0.10 % 0.10 % 9 Latency=0.10% Throughput=0.10%
Comparing bench/librawspeed/io/BitPumpJPEGBenchmark-old to bench/librawspeed/io/BitPumpJPEGBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
---------------------------------------------------------------------------------------------------------------------------------------
BM_BitPumpJPEG/Stuffed/16777216_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM_BitPumpJPEG/Stuffed/16777216_mean -0.0599 -0.0600 8844 8314 8842 8312
BM_BitPumpJPEG/Stuffed/16777216_median -0.0612 -0.0613 8823 8283 8821 8281
BM_BitPumpJPEG/Stuffed/16777216_stddev +0.3094 +0.3215 69 90 68 90
BM_BitPumpJPEG/Stuffed/16777216_cv +0.3929 +0.4058 0 0 0 0
BM_BitPumpJPEG/Unstuffed/16777216_pvalue 0.0004 0.0004 U Test, Repetitions: 9 vs 9
BM_BitPumpJPEG/Unstuffed/16777216_mean -0.0764 -0.0763 8044 7430 8040 7427
BM_BitPumpJPEG/Unstuffed/16777216_median -0.0765 -0.0765 8045 7430 8040 7425
BM_BitPumpJPEG/Unstuffed/16777216_stddev +3.3668 +2.1364 2 7 2 7
BM_BitPumpJPEG/Unstuffed/16777216_cv +3.7282 +2.3954 0 0 0 0
OVERALL_GEOMEAN -0.0682 -0.0682 0 0 0 0
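For reading the comparison table above: Google Benchmark's compare.py reports the relative difference, (new − old) / old, so −0.0599 on the Stuffed mean says the new build is about 6% faster. A quick check against the printed means:

```python
def rel_change(old, new):
    """Relative difference as reported by Google Benchmark's compare.py."""
    return (new - old) / old

# Stuffed/16777216 mean times (us) from the two runs above:
print(round(rel_change(8844, 8314), 4))  # -0.0599, i.e. ~6% faster
# Unstuffed/16777216 mean times; matches the table up to rounding
# of the printed timings:
print(round(rel_change(8044, 7430), 4))  # -0.0763
```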


Describe the bug

Exiv2 outputs a number for all XMP fields which should normally contain date strings, according to Adobe's specification.

> exiv2 -p at --key Xmp.video.MediaCreateDate print D:\Kamera\MP_ROOT\100ANV01\MAH00696.MP4
Xmp.video.MediaCreateDate XmpText 10 3593683864

I am going to be honest, maybe I am just too silly to properly parse this value, but the number doesn't seem to be a Unix timestamp. I am calling exiv2 as part of a Rust program I am writing to organize my media files. For videos and audio, exiv2 isn't as reliable with timestamps.

To Reproduce

  1. Download the current release "v0.28.1"
  2. Download this file, copyright by me, taken on my Sony digital camera a few years back; permission is granted for it to be distributed under the project's license
  3. https://drive.google.com/file/d/19mA8gwNb-YCuUg__f4eJyaAdkNiSq_7j/view?usp=sharing
  4. Run the exiv2 command to print XMP fields, and see that the MediaCreateDate field (and others) are set to some number that doesn't look like a date string (e.g. 2017-11-16T13:31:04Z) and isn't the correct Unix timestamp.
  5. Open Windows Properties and run exiftool to see that the property is interpreted differently by other programs
> exiv2 -p at --key Xmp.video.MediaCreateDate print D:\Kamera\MP_ROOT\100ANV01\MAH00696.MP4
Xmp.video.MediaCreateDate XmpText 10 3593683864
exiftool D:\Kamera\MP_ROOT\100ANV01\MAH00696.MP4
ExifTool Version Number : 12.67
File Name : MAH00696.MP4

File Modification Date/Time : 2017:11:16 13:34:20+01:00
File Access Date/Time : 2023:02:11 02:02:18+01:00
File Creation Date/Time : 2023:02:11 02:02:19+01:00

Create Date : 2017:11:16 13:31:04
Modify Date : 2017:11:16 13:34:18

Track Create Date : 2017:11:16 13:31:04
Track Modify Date : 2017:11:16 13:34:18
Track ID : 1

Media Header Version : 0
Media Create Date : 2017:11:16 13:31:04
Media Modify Date : 2017:11:16 13:34:18
Media Time Scale : 48000
Media Duration : 0:03:14
Media Language Code : und

Expected behavior

I expect an output like

Xmp.video.MediaCreateDate Date 2017-11-16 13:31:04

Although Windows "Properties" shows a different date: 2017-01-28 16:49

Desktop (please complete the following information):

  • OS and version: Windows 11
  • Exiv2 version and source: exiv2-0.28.1-2019msvc64 from github.com
  • Compiler and version: ...
  • Compilation mode and/or compiler flags: ...

TBH, I couldn't find an authoritative source for this Xmp.video "XMP Extended Video schema".

The exiv2 code says this:

https://github.com/Exiv2/exiv2/blob/cac362c7285237f55bc45ee3fe1effba1f7b741a/src/properties.cpp#L3682-L3684

If this is the correct number in seconds, then I don't think it is a bug, unless the actual schema specification says this should be of "date" type.

Huh! I guess I just didn't read far enough into it. I couldn't find anything for the exact property name either, which is definitely weird, but I brushed it off and pretended that "Created" was close enough.

If the timestamp is counted since January 1904, that actually makes sense!

If I add -2082848400 to the timestamp before parsing it (as a Unix timestamp), the value is completely accurate, and the same one as is output by other commands! Thank you so much for your help... and sorry that I filed this as a bug, I didn't know better.
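For reference, a minimal sketch of that conversion: the QuickTime/MP4 container stores dates as seconds since the classic Mac epoch, 1904-01-01 00:00:00 UTC, which lies 2082844800 seconds before the Unix epoch (the -2082848400 used above additionally folds in a one-hour local-time offset).

```python
from datetime import datetime, timezone

# QuickTime/MP4 dates count seconds from 1904-01-01 00:00:00 UTC, not from
# the Unix epoch: 66 years including 17 leap days = 2082844800 seconds.
QT_EPOCH_OFFSET = 2_082_844_800

def qt_to_datetime(qt_seconds: int) -> datetime:
    """Convert a QuickTime timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(qt_seconds - QT_EPOCH_OFFSET, tz=timezone.utc)

# The value Exiv2 printed for Xmp.video.MediaCreateDate:
print(qt_to_datetime(3593683864))  # 2017-11-16 13:31:04+00:00
```

This matches the Media Create Date that exiftool reports (in UTC) for the same file.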

In the future I'll know to look in properties.cpp.

Merge pull request #612 from LebedevRI/next

BitPumpJPEG::fillCache(): make it -7% faster

... does not seem to have similar effect outside of the microbenchmark, though.

Solves the issue caused by Exiv2 version 0.28.x when compiling /tools/basecurve (do we still need that tool?) with Apple Clang. The solution, copied from darktable, is to change the C++ standard to 17.

Issue #8 in my fork... Could be confusing.

I just managed to get Ansel working on Apple M1, macOS 14.2.1, by installing Exiv2@v0.27.6 from source using brew and modifying the exiv2.rb... It turns out I would only have had to wait one night ahahah. Thank you! :-)

Quality Gate failed

Failed conditions

Reliability Rating on New Code: C (required ≥ A)

See analysis details on SonarCloud



aurelienpierreeng / ansel (Public; forked from edgardoh/darktable; 18 forks, 563 stars)

Pull requests: aurelienpierreeng/ansel

7 Open, 77 Closed

#319: Enable Exiv2 0.28.x compilation. Solving issue #8 (opened Jan 21, 2024 by lologor; 3 comments)

#294: Import Session: Removed issues for Windows and some other code adjustments (draft; opened Dec 22, 2023 by Jiyone; 1 comment)

#283: Color eq 2 (milestone 1.0; opened Dec 18, 2023 by aurelienpierre; 6 comments)

#244: Remove leading and trailing space in generated folder and file names (draft; labels: difficulty: medium, enhancement, priority: low, affects optional and niche functionalities; opened Dec 4, 2023 by Jiyone; 1 comment)

#102: WIP: prefilter denoise module (opened Feb 23, 2023 by aurelienpierre; 2 comments)

#47: First Spanish translation update (opened Dec 19, 2022 by EdgarLux; 5 comments)

#45: masks: allow drawing in viewport space (opened Dec 12, 2022 by aurelienpierre; 1 comment)


BitPumpJPEG::fillCache(): make it -7% faster

build-Clang17-release$ /usr/src/googlebenchmark/tools/compare.py -a benchmarks bench/librawspeed/io/BitPumpJPEGBenchmark{-old,} --benchmark_repetitions=27 --benchmark_min_warmup_time=0.5
RUNNING: bench/librawspeed/io/BitPumpJPEGBenchmark-old --benchmark_repetitions=27 --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpefyylhrh
2024-01-22T00:02:21+03:00
Running bench/librawspeed/io/BitPumpJPEGBenchmark-old
Run on (32 X 3403.04 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 1.05, 0.85, 0.77
---------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
---------------------------------------------------------------------------------------------------
BM_BitPumpJPEG/Stuffed/16777216_mean 9443 us 9443 us 27 Latency=562.847ps Throughput=1.65478Gi/s
BM_BitPumpJPEG/Stuffed/16777216_median 9418 us 9417 us 27 Latency=561.327ps Throughput=1.65915Gi/s
BM_BitPumpJPEG/Stuffed/16777216_stddev 80.9 us 81.0 us 27 Latency=4.82567ps Throughput=14.1598Mi/s
BM_BitPumpJPEG/Stuffed/16777216_cv 0.86 % 0.86 % 27 Latency=0.86% Throughput=0.84%
BM_BitPumpJPEG/Unstuffed/16777216_mean 8664 us 8664 us 27 Latency=516.409ps Throughput=1.80349Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_median 8652 us 8651 us 27 Latency=515.655ps Throughput=1.8061Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_stddev 34.6 us 34.5 us 27 Latency=2.05914ps Throughput=7.24925Mi/s
BM_BitPumpJPEG/Unstuffed/16777216_cv 0.40 % 0.40 % 27 Latency=0.40% Throughput=0.39%
RUNNING: bench/librawspeed/io/BitPumpJPEGBenchmark --benchmark_repetitions=27 --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmplkunll6n
2024-01-22T00:03:46+03:00
Running bench/librawspeed/io/BitPumpJPEGBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 1.34, 1.02, 0.84
---------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
---------------------------------------------------------------------------------------------------
BM_BitPumpJPEG/Stuffed/16777216_mean 8811 us 8811 us 27 Latency=525.176ps Throughput=1.77337Gi/s
BM_BitPumpJPEG/Stuffed/16777216_median 8804 us 8804 us 27 Latency=524.747ps Throughput=1.7748Gi/s
BM_BitPumpJPEG/Stuffed/16777216_stddev 31.3 us 31.3 us 27 Latency=1.86386ps Throughput=6.34975Mi/s
BM_BitPumpJPEG/Stuffed/16777216_cv 0.35 % 0.35 % 27 Latency=0.35% Throughput=0.35%
BM_BitPumpJPEG/Unstuffed/16777216_mean 8040 us 8040 us 27 Latency=479.229ps Throughput=1.94338Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_median 8036 us 8035 us 27 Latency=478.946ps Throughput=1.94452Gi/s
BM_BitPumpJPEG/Unstuffed/16777216_stddev 11.4 us 11.3 us 27 Latency=674.916fs Throughput=2.79196Mi/s
BM_BitPumpJPEG/Unstuffed/16777216_cv 0.14 % 0.14 % 27 Latency=0.14% Throughput=0.14%
Comparing bench/librawspeed/io/BitPumpJPEGBenchmark-old to bench/librawspeed/io/BitPumpJPEGBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
---------------------------------------------------------------------------------------------------------------------------------------
BM_BitPumpJPEG/Stuffed/16777216_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
BM_BitPumpJPEG/Stuffed/16777216_mean -0.0669 -0.0669 9443 8811 9443 8811
BM_BitPumpJPEG/Stuffed/16777216_median -0.0652 -0.0652 9418 8804 9417 8804
BM_BitPumpJPEG/Stuffed/16777216_stddev -0.6136 -0.6138 81 31 81 31
BM_BitPumpJPEG/Stuffed/16777216_cv -0.5859 -0.5861 0 0 0 0
BM_BitPumpJPEG/Unstuffed/16777216_pvalue 0.0000 0.0000 U Test, Repetitions: 27 vs 27
BM_BitPumpJPEG/Unstuffed/16777216_mean -0.0720 -0.0720 8664 8040 8664 8040
BM_BitPumpJPEG/Unstuffed/16777216_median -0.0712 -0.0712 8652 8036 8651 8035
BM_BitPumpJPEG/Unstuffed/16777216_stddev -0.6712 -0.6722 35 11 35 11
BM_BitPumpJPEG/Unstuffed/16777216_cv -0.6457 -0.6468 0 0 0 0
OVERALL_GEOMEAN -0.0694 -0.0695 0 0 0 0

Merge pull request #611 from LebedevRI/next

Add benchmark for LJpeg byte stream unstuffing

Add benchmark for LJpeg byte stream unstuffing

It's quite useful to understand just how frequent the 0xFF00 sequences are on average.

The frequencies are sampled from all the LJpeg streams that are (as of this moment) read by library from the entirety of the RPU sample set.
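For context, "unstuffing" here refers to JPEG byte stuffing: in an entropy-coded segment, every literal 0xFF data byte is followed by a 0x00 so it cannot be mistaken for a marker, and the decoder must drop those 0x00 bytes. A minimal illustration of the transformation being benchmarked (the textbook definition, not rawspeed's actual vectorized implementation):

```python
def unstuff(data: bytes) -> bytes:
    """Drop the 0x00 'stuffing' byte that follows every literal 0xFF in a
    JPEG entropy-coded segment."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        out.append(b)
        i += 1
        if b == 0xFF and i < len(data) and data[i] == 0x00:
            i += 1  # skip the stuffing byte
    return bytes(out)

print(unstuff(b"\x12\xff\x00\x34").hex())  # 12ff34
```

How often the 0xFF00 pair actually occurs in real streams determines whether a branchy or a branchless implementation wins, which is what the benchmark samples.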

Reapply "provo-mirror.opensuse.org is broken, is there a better one?"

This reverts commit 0cd0edc1d89b293c208c67f240d4929c7c43b54a.


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (9e34fb6) 59.19% compared to head (225dab0) 59.40%.

@@ Coverage Diff @@
## develop #611 +/- ##
===========================================
+ Coverage 59.19% 59.40% +0.21% 
===========================================
 Files 251 252 +1 
 Lines 14869 14949 +80 
 Branches 2001 2008 +7 
===========================================
+ Hits 8801 8881 +80 
 Misses 5952 5952 
 Partials 116 116 
Flag Coverage Δ
benchmarks 11.04% <100.00%> (+0.53%) :arrow_up:
integration 46.45% <0.00%> (-0.26%) :arrow_down:
linux 57.11% <86.11%> (+0.19%) :arrow_up:
macOS 21.23% <98.75%> (+0.45%) :arrow_up:
rpu_u 46.45% <0.00%> (-0.26%) :arrow_down:
unittests 18.20% <0.00%> (-0.10%) :arrow_down:
windows ∅ <ø> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Description of the bug

To Reproduce

No idea how to reproduce; this happens only with one new image of a new batch I just imported. All of them have received the same default steps, and only this one seems to trigger this problem. Ansel completely locks up when I want to add an operation, consuming 1 CPU. It happens on the latest git version, on Arch Linux. It fails with OpenCL both activated and deactivated.

For example here, I tried to apply the velvia emulation…

Here is the backtrace of an attached gdb while this is occurring, if that helps:

(gdb) bt
#0 0x00007e74475f68d6 in _dev_add_history_item_ext
 (dev=dev@entry=0x5f3257c4e300, module=module@entry=0x5f325909a890, enable=enable@entry=0, force_new_item=force_new_item@entry=0, no_image=no_image@entry=0, include_masks=include_masks@entry=0)
 at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:698
#1 0x00007e74475fae1d in _dev_add_history_item (new_item=0, enable=0, module=0x5f325909a890, dev=0x5f3257c4e300) at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:828
#2 _dev_add_history_item (dev=0x5f3257c4e300, module=0x5f325909a890, enable=0, new_item=0) at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:819
#3 0x00007e74475faf3f in dt_dev_add_history_item_real (dev=0x5f3257c4e300, module=, enable=) at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:856
#4 0x00007e74476dfe4a in dt_gui_presets_apply_preset (name=0x5f325b2c4c20 "Fuji Velvia emulation", module=module@entry=0x5f325909a890) at /usr/src/debug/ansel-git/ansel/src/gui/presets.c:912
#5 0x00007e74476e019a in _menuitem_pick_preset (module=0x5f325909a890, menuitem=0x5f325b2f8b20) at /usr/src/debug/ansel-git/ansel/src/gui/presets.c:925
#6 _menuitem_button_released_preset (menuitem=0x5f325b2f8b20, event=, module=0x5f325909a890) at /usr/src/debug/ansel-git/ansel/src/gui/presets.c:996
#7 0x00007e7446672d2b in () at /usr/lib/libgtk-3.so.0
#8 0x00007e744608f6c0 in g_closure_invoke () at /usr/lib/libgobject-2.0.so.0
#9 0x00007e74460bda36 in () at /usr/lib/libgobject-2.0.so.0
#10 0x00007e74460ae335 in () at /usr/lib/libgobject-2.0.so.0
#11 0x00007e74460aec77 in g_signal_emit_valist () at /usr/lib/libgobject-2.0.so.0
#12 0x00007e74460aed34 in g_signal_emit () at /usr/lib/libgobject-2.0.so.0
#13 0x00007e74469affd5 in () at /usr/lib/libgtk-3.so.0
#14 0x00007e7446819fdb in () at /usr/lib/libgtk-3.so.0
#15 0x00007e744681ad78 in gtk_main_do_event () at /usr/lib/libgtk-3.so.0
#16 0x00007e74465235a7 in () at /usr/lib/libgdk-3.so.0
#17 0x00007e744658ea48 in () at /usr/lib/libgdk-3.so.0
#18 0x00007e7446ef4f69 in () at /usr/lib/libglib-2.0.so.0
#19 0x00007e7446f53367 in () at /usr/lib/libglib-2.0.so.0
#20 0x00007e7446ef5b97 in g_main_loop_run () at /usr/lib/libglib-2.0.so.0
#21 0x00007e7446818dc7 in gtk_main () at /usr/lib/libgtk-3.so.0
#22 0x00007e74476b6d83 in dt_gui_gtk_run (gui=) at /usr/src/debug/ansel-git/ansel/src/gui/gtk.c:778
#23 0x00005f3254df1059 in main (argc=, argv=) at /usr/src/debug/ansel-git/ansel/src/main.c:94
(gdb) bt full
#0 0x00007e74475f68d6 in _dev_add_history_item_ext
 (dev=dev@entry=0x5f3257c4e300, module=module@entry=0x5f325909a890, enable=enable@entry=0, force_new_item=force_new_item@entry=0, no_image=no_image@entry=0, include_masks=include_masks@entry=0)
 at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:698
 prior_hist = 0x5f32590d7990
 prior_history = 0x5f32590d5d60 = {0x5f32590d7990, 0x5f32590d7aa0, 0x5f32590d64b0, 0x5f32590d6590, 0x5f32590d6670, 0x5f32590d6750, 0x5f32590d7da0, 0x5f32590d8030, 0x5f32590d6890, 0x5f32590d8cd0, 0x5f32590d93a0, 0x5f32590d9a70, 0x5f32590d9d50, 0x5f32590da030, 0x5f32590da310, 0x5f32590da5e0, 0x5f32590da8b0}
 hist = 0x5f32590da310
 earlier_entry = 0
 history = 0x5f32590da5c0 = {0x5f32590da310, 0x5f32590da5e0, 0x5f32590da8b0}
 last = 
 new_is_old = 
 hist = 
#1 0x00007e74475fae1d in _dev_add_history_item (new_item=0, enable=0, module=0x5f325909a890, dev=0x5f3257c4e300) at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:828
 imgid = 
 tagid = 24370
 tag_change = 
 __FUNCTION__ = "_dev_add_history_item"
 __FUNCTION__ = "_dev_add_history_item"
#2 _dev_add_history_item (dev=0x5f3257c4e300, module=0x5f325909a890, enable=0, new_item=0) at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:819
 __FUNCTION__ = "_dev_add_history_item"
#3 0x00007e74475faf3f in dt_dev_add_history_item_real (dev=0x5f3257c4e300, module=, enable=) at /usr/src/debug/ansel-git/ansel/src/develop/develop.c:856
 __FUNCTION__ = "dt_dev_add_history_item_real"
#4 0x00007e74476dfe4a in dt_gui_presets_apply_preset (name=0x5f325b2c4c20 "Fuji Velvia emulation", module=module@entry=0x5f325909a890) at /usr/src/debug/ansel-git/ansel/src/gui/presets.c:912
 stmt = 0x5f32563c19d0
 __FUNCTION__ = "dt_gui_presets_apply_preset"
#5 0x00007e74476e019a in _menuitem_pick_preset (module=0x5f325909a890, menuitem=0x5f325b2f8b20) at /usr/src/debug/ansel-git/ansel/src/gui/presets.c:925
 name = 
#6 _menuitem_button_released_preset (menuitem=0x5f325b2f8b20, event=, module=0x5f325909a890) at /usr/src/debug/ansel-git/ansel/src/gui/presets.c:996
#7 0x00007e7446672d2b in () at /usr/lib/libgtk-3.so.0
#8 0x00007e744608f6c0 in g_closure_invoke () at /usr/lib/libgobject-2.0.so.0
#9 0x00007e74460bda36 in () at /usr/lib/libgobject-2.0.so.0
#10 0x00007e74460ae335 in () at /usr/lib/libgobject-2.0.so.0
#11 0x00007e74460aec77 in g_signal_emit_valist () at /usr/lib/libgobject-2.0.so.0
#12 0x00007e74460aed34 in g_signal_emit () at /usr/lib/libgobject-2.0.so.0
#13 0x00007e74469affd5 in () at /usr/lib/libgtk-3.so.0
#14 0x00007e7446819fdb in () at /usr/lib/libgtk-3.so.0
#15 0x00007e744681ad78 in gtk_main_do_event () at /usr/lib/libgtk-3.so.0
#16 0x00007e74465235a7 in () at /usr/lib/libgdk-3.so.0
#17 0x00007e744658ea48 in () at /usr/lib/libgdk-3.so.0
#18 0x00007e7446ef4f69 in () at /usr/lib/libglib-2.0.so.0
#19 0x00007e7446f53367 in () at /usr/lib/libglib-2.0.so.0
#20 0x00007e7446ef5b97 in g_main_loop_run () at /usr/lib/libglib-2.0.so.0
#21 0x00007e7446818dc7 in gtk_main () at /usr/lib/libgtk-3.so.0
#22 0x00007e74476b6d83 in dt_gui_gtk_run (gui=) at /usr/src/debug/ansel-git/ansel/src/gui/gtk.c:778
 widget = 
 allocation = {x = 0, y = 0, width = 2246, height = 1358}
#23 0x00005f3254df1059 in main (argc=, argv=) at /usr/src/debug/ansel-git/ansel/src/main.c:94

Which commit introduced the error

I don't know. I can bisect it if needed

System

  • ansel version : ge2c4a0a60
  • OS : Linux, 6.7.0-zen3-1.1-zen
  • Linux - Distro : archlinux
  • Memory : 32GB
  • Graphics card : Radeon RX 6650 XT
  • Graphics driver : amdgpu
  • OpenCL installed : yes
  • OpenCL activated : no
  • Xorg : wayland
  • Desktop : kde
  • GTK+ : gtk3 1:3.24.40-2.1
  • gcc : 13.2.1
  • cflags : -march=x86-64-v3 -O0 -pipe -fno-plt -fexceptions \ -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \ -fstack-clash-protection -fcf-protection"
  • CMAKE_BUILD_TYPE : cmake -B build \ -DCMAKE_INSTALL_PREFIX=/usr \ -DCMAKE_INSTALL_LIBDIR=lib \ -DCMAKE_INSTALL_LIBEXECDIR=lib \ -DCMAKE_BUILD_TYPE=Release \ -DCMAKE_SKIP_RPATH=ON \ -DBINARY_PACKAGE_BUILD=ON \ -DUSE_LIBSECRET=ON \ -DUSE_LUA=ON \ -DUSE_BUNDLED_LUA=OFF \ -DUSE_LIBRAW=ON \ -DUSE_BUNDLED_LIBRAW=OFF \ -DUSE_COLORD=ON \ -DBUILD_CURVE_TOOLS=ON \ -DBUILD_NOISE_TOOLS=ON \ -DRAWSPEED_ENABLE_LTO=ON

Ok, I've looked a little bit more into the debugging...

gdb tells me it's stuck looping on https://github.com/aurelienpierreeng/ansel/blob/e2c4a0a60cd80f741dd3d3c6ab72be9ac11234fb/src/develop/develop.c#L693. perf top confirms the hot lines are 694 and 698; the program is only doing this (and spending time in g_list_nth)

I don't know what's wrong in my history to trigger this... I can attach it here if it helps

I forgot to add: archlinux, git head…

Haven't actually used the new "unknown-no-samples" tag (as first planned) because it is basically the same as the supported X-A10.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (9e34fb6) 59.19% compared to head (babce5c) 59.19%.

@@ Coverage Diff @@
## develop #610 +/- ##
========================================
 Coverage 59.19% 59.19% 
========================================
 Files 251 251 
 Lines 14869 14869 
 Branches 2001 2001 
========================================
 Hits 8801 8801 
 Misses 5952 5952 
 Partials 116 116 
Flag Coverage Δ
benchmarks 10.50% <ø> (ø)
integration 46.70% <ø> (ø)
linux 56.92% <ø> (ø)
macOS 20.77% <ø> (ø)
rpu_u 46.70% <ø> (ø)
unittests 18.30% <ø> (ø)
windows ∅ <ø> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Thank you for looking into this.

The thing is, i really don't want to add explicit support for something, when it fails to meet the minimal requirements, namely: a sample on RPU. On the other hand, i'd gladly take an entry similar to that of https://github.com/darktable-org/rawspeed/commit/8b50aa4fc4b2676938cf54d0c0df461129569558, which, as discussed, may actually help us get said sample.

On the other hand, i'd gladly take an entry similar to that of 8b50aa4, which, as discussed, may actually help us get said sample.

Well, that's how I started... ;)


I'm sorry, i'm failing to comprehend the meaning. What i'm saying is that i don't want to add new cameras.xml entries with mode not being one of yes, no, unknown, unknown-no-samples.

I'm sorry, i'm failing to comprehend the meaning.

I was going to add this one as "unknown-no-samples", but then I figured it is sort of supported (not entirely unknown).

As I mentioned in the discussion way before, I don't really see any difference between "no-samples" and "unknown-no-samples" in the darktable UI, the warning message is the same in the current implementation. Perhaps it'll come in the future...

I'll change the PR anyway.

I was going to add this one as "unknown-no-samples", but then I figured it is sort of supported (not entirely unknown).

Right, i'm not arguing with that. I'm sure it works with the entry added. My point is that i don't want to add said entry (other than ``) with no sample :S (== no test coverage)

As I mentioned in the discussion way before, I don't really see any difference between "no-samples" and "unknown-no-samples" in the darktable UI, the warning message is the same in the current implementation. Perhaps it'll come in the future...

Is that with submodule updated, or just edited cameras.xml? With unknown-no-samples it will completely refuse to load in darktable.

Is that with submodule updated, or just edited cameras.xml?

Good point, I'll retest when I get a chance.


Note that i suspect you'll need to update black level copying code in imageio_rawspeed.

Note that i suspect you'll need to update black level copying code in imageio_rawspeed.

Hm, it fails building as a dt subproject even before getting to imageio_rawspeed. It builds fine standalone though?!

C:/msys64/home/kmilos/darktable/src/external/rawspeed/src/librawspeed/common/Cpuid.cpp:37:8: error: '__get_cpuid' was not declared in this scope
 37 | if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
 | ^~~~~~~~~~~
C:/msys64/home/kmilos/darktable/src/external/rawspeed/src/librawspeed/common/Cpuid.cpp:40:16: error: 'bit_SSE2' was not declared in this scope; did you mean 'WITH_SSE2'?
 40 | return edx & bit_SSE2;
 | ^~~~~~~~
 | WITH_SSE2

I guess I need to sync up some of the dt cmake modules as well?

Huh, that makes no sense. You have removed the build directory first, right?

Huh, that makes no sense. You have removed the build directory first, right?

Yep... It's almost as if the system `<cpuid.h>` doesn't get included, i.e. `common/Cpuid.h` gets priority and is included twice?!

Hm, it fails building as a dt subproject even before getting to imageio_rawspeed.

I just tried locally, and it works fine for me.

I just tried locally, and it works fine for me.

Windows? Could be because it can't tell between "Cpuid.h" and `<cpuid.h>`? But why when it's only as subproject and not standalone?

I just tried locally, and it works fine for me.

Windows? No.

Could be because it can't tell between "Cpuid.h" and `<cpuid.h>`? But why when it's only as subproject and not standalone?

None of that stuff has been touched for a while now. Does dt master compile for you as-is?

FWIW, if i rename common/Cpuid.h file to common/cpuid.h (and fix includes), it does fail just the way you've described. So the checkout must be on a case-sensitive filesystem. But why does it fail for you now? Sounds like you changed something on your side.

Does dt master compile for you as-is?

Yep, no problem if I go back on dt master w/

cd src/external/rawspeed
git checkout 1e505de
cd ../../..
rm -rf build

Well, can you git bisect it then please?

Well, can you git bisect it then please?

Can try when I get another chance.

Well, can you git bisect it then please?

Supposedly Windows dt build didn't like c2a68bb4124265eaf3b3dbac910b9cf32213943c

target_include_directories(rawspeed_common SYSTEM PUBLIC "${RAWSPEED_SOURCE_DIR}/src/external") perhaps makes common/Cpuid.h come before `<cpuid.h>` on case-insensitive platforms?

Right. That kind of makes sense i suppose. (I really should find energy to kill last remnant of intrinsics and remove those files.)

With your imageio_rawspeed.cc patch, this now indeed fails to load as any other unknown camera, so good to go I guess...

Note to self: reopen PR before making changes to the branch. 🤦

Yup, github is best, isn't it?

Yup, github is best, isn't it?

It's not just a permission issue, I really have to create a new one?

It's not a permission issue, i regularly hit that too.

replace copy_n + reinterpret_cast with memcpy

These usages are not problematic, but they would be if alignment does not match. Also less verbose.

Signed-off-by: Rosen Penev

replace memmove with copy_n

They are equivalent.

Signed-off-by: Rosen Penev

replace reinterpret_cast with memcpy

Fixes cast-align warning on 32-bit.

Signed-off-by: Rosen Penev

remove warning suppression

No longer relevant.

Signed-off-by: Rosen Penev

This is an automatic backport of pull request #2895 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com

Review these changes using an interactive CodeSee Map

Legend

Codecov Report

Attention: 11 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (aefee11) 64.00%. Report is 26 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2898 +/- ##
=======================================
 Coverage 63.99% 64.00% 
=======================================
 Files 103 104 +1 
 Lines 22338 22360 +22 
 Branches 10821 10833 +12 
=======================================
+ Hits 14296 14311 +15 
- Misses 5818 5823 +5 
- Partials 2224 2226 +2 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Might be worth it. A lot of extra code though...

ping @wormnest

Review these changes using an interactive CodeSee Map

Legend

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison is base (57d9d2f) 63.90% compared to head (1000540) 63.93%.

Files Patch % Lines
src/value.cpp 86.36% 0 Missing and 3 partials :warning:
@@ Coverage Diff @@
## main #2897 +/- ##
==========================================
+ Coverage 63.90% 63.93% +0.03% 
==========================================
 Files 104 104 
 Lines 22389 22400 +11 
 Branches 10876 10877 +1 
==========================================
+ Hits 14308 14322 +14 
+ Misses 5857 5854 -3 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@neheb Thanks for working on this. Tested the patch and still seeing the slowdown. TimeValue::read in the same file is affected in the same way.

On a cursory glance, that regex is quite long. It would be difficult to remove.

Yes, understandable. GIMP may have to wait switching to UCRT64 until this is fixed upstream, or go for CLANG64 instead.

alternatively ask the GIMP developers to patch exiv2 to get rid of regex usage. I may take a crack at it at some point. Doubtful though.

It's very good that there are tests for this. Original version was horribly broken.

I am one of the GIMP devs 😃 However, I personally have little C++ experience and we are already spread thin with what we can handle and needing to release 3.0 in a reasonable time. Since MINGW64 is still working ok for us, I think it best we stay with that for now, even if it means not being able to use the latest version of exiv2.

I appreciate you having taken the time to look at it, thanks.

Regex is a recent addition?

oh I see. regex.h was replaced with C++ regex.

regex.h was replaced with C++ regex

Yes, and there is unfortunately a bug in libstdc++ interaction w/ UCRT on Windows. CLANG64 is not affected as it uses libc++, and MINGW64 is not affected as it links to legacy MSVCRT instead.

Edit: Note that some say that moving away from a custom regex implementation was maybe not the best choice...

It might make sense to switch to https://github.com/hanickadot/compile-time-regular-expressions

Or at least get rid of regex in src.

last regex removal in value.cpp

--- a/src/value.cpp
+++ b/src/value.cpp
@@ -9,7 +9,6 @@
 #include "types.hpp"

// + standard includes
-#include <regex>
 #include

// *****************************************************************************
@@ -909,39 +908,66 @@ int TimeValue::read(const std::string& buf) {
 // https://web.archive.org/web/20171020084445/https://www.loc.gov/standards/datetime/ISO_DIS%208601-1.pdf
 // Not supported formats:
 // 4.2.2.4 Representations with decimal fraction: 232050,5
- static const std::regex re(R"(^(2[0-3]|[01][0-9]):?([0-5][0-9])?:?([0-5][0-9])?$)");
- static const std::regex reExt(
- R"(^(2[0-3]|[01][0-9]):?([0-5][0-9]):?([0-5][0-9])(Z|[+-](?:2[0-3]|[01][0-9])(?::?(?:[0-5][0-9]))?)$)");
-
- if (std::smatch sm; std::regex_match(buf, sm, re) || std::regex_match(buf, sm, reExt)) {
- time_.hour = sm.length(1) ? std::stoi(sm[1].str()) : 0;
- time_.minute = sm.length(2) ? std::stoi(sm[2].str()) : 0;
- time_.second = sm.length(3) ? std::stoi(sm[3].str()) : 0;
- if (sm.size() > 4) {
- std::string str = sm[4].str();
- const auto strSize = str.size();
- auto posColon = str.find(':');
-
- if (posColon == std::string::npos) {
- // Extended format
- time_.tzHour = std::stoi(str.substr(0, 3));
- if (strSize > 3) {
- int minute = std::stoi(str.substr(3));
- time_.tzMinute = time_.tzHour < 0 ? -minute : minute;
- }
- } else {
- // Basic format
- time_.tzHour = std::stoi(str.substr(0, posColon));
- int minute = std::stoi(str.substr(posColon + 1));
- time_.tzMinute = time_.tzHour < 0 ? -minute : minute;
- }
+ size_t minutePos = 2;
+ size_t secondPos = 4;
+
+ if (buf.size() < 2)
+ return 1;
+
+ if (buf.size() > 8 && buf.find('+') == std::string::npos)
+ return 1;
+
+ for (auto c : buf)
+ if (c != ':' && c != '+' && c != '-' && !std::isdigit(c))
+ return 1;
+
+ if (buf[2] == ':') {
+ minutePos = 3;
+ }
+
+ if (buf.size() > 4 && buf[5] == ':') {
+ secondPos = 6;
+ }
+
+ auto tz = buf.find('+');
+ if (tz == std::string::npos) {
+ auto tzn = buf.find('-');
+ if (tzn == std::string::npos) {
+ time_.tzHour = 0;
+ time_.tzMinute = 0;
+ } else {
+ auto tzs = buf.substr(tzn, buf.size() - tzn);
+ time_.tzHour = std::stoul(tzs.substr(1, 2));
+ if (tzs.size() > 3)
+ time_.tzMinute = -std::stoul(tzs.substr(4, 2));
+ else if (tzs.size() > 4)
+ time_.tzMinute = -std::stoul(tzs.substr(3, 2));
 }
- return 0;
+ } else {
+ auto tzs = buf.substr(tz, buf.size() - tz);
+ time_.tzHour = std::stoul(tzs.substr(1, 2));
+ if (tzs.size() > 3)
+ time_.tzMinute = std::stoul(tzs.substr(4, 2));
+ else if (tzs.size() > 4)
+ time_.tzMinute = std::stoul(tzs.substr(3, 2));
 }
+
+ auto h = std::stoi(buf.substr(0, 2));
+ if (h >= 24)
+ return 1;
+
+ time_.hour = h;
+ time_.minute = buf.size() > 3 ? std::stoul(buf.substr(minutePos, 2)) : 0;
+ time_.second = buf.size() > 5 ? std::stoul(buf.substr(secondPos, 2)) : 0;
+
 #ifndef SUPPRESS_WARNINGS
 EXV_WARNING << Error(ErrorCode::kerUnsupportedTimeFormat) << "\n";
 #endif
- return 1;
+
+ if (time_.hour >= 24 || time_.minute >= 60 || time_.second >= 60 || time_.tzHour >= 24 || time_.tzMinute >= 60)
+ return 1;
+
+ return 0;
 }

/// \todo not used internally. At least we should test it

still fails ATimeValue.canBeReadFromExtendedStringWithTimeZoneDesignatorNegative

Not sure what's wrong.

@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

Fixes cast-align warning on 32-bit.

Review these changes using an interactive CodeSee Map

Legend

interesting. this is the only usage of

*reinterpret_cast

guess that's bad.

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (1290548) 63.90% compared to head (ba2f8cb) 63.90%. Report is 2 commits behind head on main.

Files Patch % Lines
src/http.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## main #2896 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #608 from LebedevRI/next

Add support for unknown and unknown-no-samples support modes in cameras.xml

Merge pull request #609 from LebedevRI/cleanup

Small no-op cleanup

Extract common RawDecoder::handleCameraSupport out of checkCameraSupported() / setMetaData()

RawDecoder::handleCameraSupport(): beg for samples even in setMetaData()

checkCameraSupported() is not necessarily called.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (573d7c4) 59.15% compared to head (ad9b522) 59.15%.

@@ Coverage Diff @@
## develop #609 +/- ##
========================================
 Coverage 59.15% 59.15% 
========================================
 Files 251 251 
 Lines 14855 14855 
 Branches 1999 1999 
========================================
 Hits 8788 8788 
 Misses 5950 5950 
 Partials 117 117 
Flag Coverage Δ
benchmarks 10.49% <22.22%> (ø)
integration 46.65% <33.33%> (ø)
linux 56.88% <100.00%> (ø)
macOS 20.77% <85.71%> (ø)
rpu_u 46.65% <33.33%> (ø)
unittests 18.30% <86.66%> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

The idea is that we aren't likely to ever explicitly support or unsupport every single camera ever released, yet we must make that determination to enter a camera into cameras.xml. Still, it is generally beneficial to list a camera in cameras.xml without making that determination, so that, for example, RPU can ask for samples for that camera.

Therefore, we need new types of support= values in cameras.xml, ones that do not affect the support status but allow us to differentiate based on the existence of RPU samples.

Thus, patch.

CC @kmilos
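As a sketch, an entry using one of the new modes might look like this (the make/model are placeholders, and the attribute spelling follows the discussion above rather than a verified schema):

```xml
<!-- Hypothetical cameras.xml entry: listed so RPU can ask for samples,
     without claiming the camera is supported or unsupported. -->
<Camera make="SomeVendor" model="SomeModel" support="unknown-no-samples"/>
```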

Codecov Report

Attention: 10 lines in your changes are missing coverage. Please review.

Comparison is base (573d7c4) 59.15% compared to head (8b50aa4) 59.19%. Report is 3 commits behind head on develop.

Files Patch % Lines
src/librawspeed/decoders/RawDecoder.cpp 72.72% 9 Missing :warning:
src/librawspeed/metadata/Camera.cpp 80.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #608 +/- ##
===========================================
+ Coverage 59.15% 59.19% +0.03% 
===========================================
 Files 251 251 
 Lines 14855 14869 +14 
 Branches 1999 2001 +2 
===========================================
+ Hits 8788 8801 +13 
- Misses 5950 5952 +2 
+ Partials 117 116 -1 
Flag Coverage Δ
benchmarks 10.50% <10.52%> (+0.01%) :arrow_up:
integration 46.70% <77.77%> (+0.05%) :arrow_up:
linux 56.92% <73.68%> (+0.03%) :arrow_up:
macOS 20.77% <6.25%> (-0.01%) :arrow_down:
rpu_u 46.70% <77.77%> (+0.05%) :arrow_up:
unittests 18.30% <10.52%> (+<0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Thanks, will be interesting to see how this plays out.

Merge pull request #607 from LebedevRI/unbuffering

Make Buffer (even thinner) wrapper for Array1DRef

FileWriter::writeFile(): get rid of pointless Buffer::getData()

This code is dead. Perhaps it should be removed?

Merge pull request #606 from LebedevRI/next

Add some more consistency to the 'array' wrappers

CroppedArray2DRef: only main ctor may member-initialize, others should delegate to it

CroppedArray1DRef: only main ctor may member-initialize, others should delegate to it

Array2DRef: only main ctor may member-initialize, others should delegate to it

Array1DRef: only main ctor may member-initialize, others should delegate to it

AbstractDngDecompressor: don't iterate over std::array, wrap it into Array1DRef

_GLIBCXX_DEBUG really does not like the original loop for some reason.

CMake: don't bother looking for specific OpenMP version

GCC13 still does not claim to support it, and we can't make it, unlike clang.

PhaseOneDecompressor: don't iterate over std::array, wrap it into Array1DRef

_GLIBCXX_DEBUG really does not like the original loop for some reason.

PanasonicV4Decompressor: don't iterate over std::array, wrap it into Array1DRef

_GLIBCXX_DEBUG really does not like the original loop for some reason.

CiffEntry: construct via a Create() method

Mainly, avoid implicitly initializing ByteStream

PanasonicV5Decompressor: don't iterate over std::array, wrap it into Array1DRef

_GLIBCXX_DEBUG really does not like the original loop for some reason.

Test to see if we can skip building the glib library and instead install it via choco

Codecov Report

Attention: 37 lines in your changes are missing coverage. Please review.

Comparison is base (d662550) 59.26% compared to head (94731c9) 59.17%.

Files Patch % Lines
...rawspeed/decompressors/AbstractDngDecompressor.cpp 30.76% 18 Missing :warning:
src/librawspeed/adt/Array2DRef.h 80.64% 6 Missing :warning:
...rawspeed/decompressors/PanasonicV5Decompressor.cpp 0.00% 5 Missing :warning:
src/librawspeed/adt/Array1DRef.h 83.33% 4 Missing :warning:
src/librawspeed/adt/CroppedArray2DRef.h 66.66% 2 Missing :warning:
src/librawspeed/tiff/CiffEntry.cpp 83.33% 1 Missing and 1 partial :warning:
@@ Coverage Diff @@
## develop #606 +/- ##
===========================================
- Coverage 59.26% 59.17% -0.10% 
===========================================
 Files 251 251 
 Lines 14849 14900 +51 
 Branches 2000 2000 
===========================================
+ Hits 8801 8817 +16 
- Misses 5931 5965 +34 
- Partials 117 118 +1 
Flag Coverage Δ
benchmarks 10.55% <38.21%> (-0.04%) :arrow_down:
integration 46.65% <81.00%> (-0.02%) :arrow_down:
linux 56.86% <80.19%> (-0.05%) :arrow_down:
macOS 20.75% <17.00%> (-0.12%) :arrow_down:
rpu_u 46.65% <81.00%> (-0.02%) :arrow_down:
unittests 18.31% <22.03%> (-0.17%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Description of the bug

OpenCL gets disabled after a few mouse wheel pan/zoom in Darkroom.

To Reproduce

  1. Open a raw image (without any modules other than the default pipeline) in Darkroom view.
  2. Start monitoring GPU memory usage, e.g. with nVidia cards use nvidia-smi -l.
  3. Mouse wheel zoom to 100% level in the picture.
  4. Mouse wheel zoom back to original zoom level (fit the screen).
  5. Pan picture (even if the picture fits the screen i.e. will not actually move).
  6. Repeat steps 3 and 5 and notice GPU memory growing.
  7. Depending on your GPU memory size, ansel will eventually disable OpenCL with message

ansel discovered problems with your OpenCL setup; disabling OpenCL for this session!

The only way to free GPU memory is to exit ansel, e.g. closing Darkroom does not free the GPU memory.

On my setup I see a single mouse wheel zoom take a few hundred MB of GPU memory, and a long pan (e.g. across the whole screen width) up to 3 GB in one go.

Expected behavior

OpenCL will not get disabled and GPU memory consumption would not grow.

Context

When OpenCL gets disabled, ansel-cltest shows an out-of-GPU-memory error:

% clinfo -l
Platform #0: NVIDIA CUDA
 `-- Device #0: NVIDIA GeForce RTX 3050 6GB Laptop GPU

% ./root/opt/ansel/bin/ansel-cltest
[dt_get_sysresource_level] switched to 1 as `default'
 total mem: 64031MB
 mipmap cache: 8003MB
 available mem: 32015MB
 singlebuff: 500MB
 OpenCL tune mem: OFF
 OpenCL pinned: OFF
[opencl_init] opencl related configuration options:
[opencl_init] opencl: ON
[opencl_init] opencl_scheduling_profile: 'default'
[opencl_init] opencl_library: 'default path'
[opencl_init] opencl_device_priority: '*/!0,*/*/*'
[opencl_init] opencl_mandatory_timeout: 400
[opencl_init] opencl_synch_cache: active module
[opencl_init] opencl library 'libOpenCL' found on your system and loaded
[opencl_init] found 1 platform
[opencl_init] found 1 device

[dt_opencl_device_init]
 DEVICE: 0: 'NVIDIA GeForce RTX 3050 6GB Laptop GPU'
 CANONICAL NAME: nvidiageforcertx30506gblaptopgpu
 PLATFORM NAME & VENDOR: NVIDIA CUDA, NVIDIA Corporation
 DRIVER VERSION: 535.113.01
 DEVICE VERSION: OpenCL 3.0 CUDA, SM_20 SUPPORT
 DEVICE_TYPE: GPU
 GLOBAL MEM SIZE: 5938 MB
 MAX MEM ALLOC: 1484 MB
 MAX IMAGE SIZE: 32768 x 32768
 MAX WORK GROUP SIZE: 1024
 MAX WORK ITEM DIMENSIONS: 3
 MAX WORK ITEM SIZES: [ 1024 1024 64 ]
 ASYNC PIXELPIPE: NO
 PINNED MEMORY TRANSFER: NO
 MEMORY TUNING: NO
 FORCED HEADROOM: 400
 AVOID ATOMICS: NO
 MICRO NAP: 250
 ROUNDUP WIDTH: 16
 ROUNDUP HEIGHT: 16
 CHECK EVENT HANDLES: 128
 PERFORMANCE: 0.896092
 DEFAULT DEVICE: NO
 *** could not create context *** CL_OUT_OF_HOST_MEMORY
[opencl_init] no suitable devices found.
[opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
[opencl_init] initial status of opencl enabled flag is OFF.

Same kind of GPU memory consumption happens with OpenCL on the integrated Intel GPU. I don't know a good tool to measure Intel GPU memory consumption, but Xorg kept growing (system memory usage went from 5 GB to 45 GB with a dozen or so zoom/pan operations) and went back to normal only when ansel exited.

% clinfo -l
Platform #0: Intel(R) OpenCL Graphics
 `-- Device #0: Intel(R) Iris(R) Xe Graphics

% ./root/opt/ansel/bin/ansel-cltest
[dt_get_sysresource_level] switched to 1 as `default'
 total mem: 64031MB
 mipmap cache: 8003MB
 available mem: 32015MB
 singlebuff: 500MB
 OpenCL tune mem: OFF
 OpenCL pinned: OFF
[opencl_init] opencl related configuration options:
[opencl_init] opencl: ON
[opencl_init] opencl_scheduling_profile: 'default'
[opencl_init] opencl_library: 'default path'
[opencl_init] opencl_device_priority: '*/!0,*/*/*'
[opencl_init] opencl_mandatory_timeout: 400
[opencl_init] opencl_synch_cache: active module
[opencl_init] opencl library 'libOpenCL' found on your system and loaded
[opencl_init] found 1 platform
[opencl_init] found 1 device

[dt_opencl_device_init]
 DEVICE: 0: 'Intel(R) Iris(R) Xe Graphics'
 CANONICAL NAME: intelririsrxegraphics
 PLATFORM NAME & VENDOR: Intel(R) OpenCL Graphics, Intel(R) Corporation
 DRIVER VERSION: 23.39.27427.23
 DEVICE VERSION: OpenCL 3.0 NEO
 DEVICE_TYPE: GPU
 GLOBAL MEM SIZE: 51225 MB
 MAX MEM ALLOC: 4096 MB
 MAX IMAGE SIZE: 16384 x 16384
 MAX WORK GROUP SIZE: 512
 MAX WORK ITEM DIMENSIONS: 3
 MAX WORK ITEM SIZES: [ 512 512 512 ]
 ASYNC PIXELPIPE: NO
 PINNED MEMORY TRANSFER: NO
 MEMORY TUNING: NO
 FORCED HEADROOM: 400
 AVOID ATOMICS: NO
 MICRO NAP: 250
 ROUNDUP WIDTH: 16
 ROUNDUP HEIGHT: 16
 CHECK EVENT HANDLES: 128
 PERFORMANCE: 0.634240
 DEFAULT DEVICE: NO
 KERNEL DIRECTORY: root/opt/ansel/share/ansel/kernels
 CL COMPILER OPTION: -cl-fast-relaxed-math
 KERNEL LOADING TIME: 0.0106 sec
[opencl_init] OpenCL successfully initialized.
[opencl_init] here are the internal numbers and names of OpenCL devices available to darktable:
[opencl_init] 0 'Intel(R) Iris(R) Xe Graphics'
[opencl_init] FINALLY: opencl is AVAILABLE on this system.
[opencl_init] initial status of opencl enabled flag is ON.
[dt_opencl_update_priorities] these are your device priorities:
[dt_opencl_update_priorities] image preview export thumbs
[dt_opencl_update_priorities] 0 -1 0 0
[dt_opencl_update_priorities] show if opencl use is mandatory for a given pixelpipe:
[dt_opencl_update_priorities] image preview export thumbs
[dt_opencl_update_priorities] 0 0 0 0
[opencl_synchronization_timeout] synchronization timeout set to 200

Which commit introduced the error

e2c4a0a60

System

  • ansel version : e2c4a0a60 2024-01-05T15:37:55+01:00 Fixed error and improved some strings
  • OS : Linux - kernel 6.5.0
  • Linux - Distro : Tuxedo 22.04 (~= Ubuntu 22.04)
  • Memory : 64GB
  • Graphics card : NVIDIA GeForce RTX 3050 6GB Laptop GPU + Intel(R) Iris(R) Xe Graphics
  • Graphics driver : OpenCL 3.0 CUDA, 535.113.01 + OpenCL 3.0 NEO, 23.39.27427.23
  • OpenCL installed : yes
  • OpenCL activated : yes
  • Xorg : 7.7+23ubuntu2
  • Desktop : KDE Plasma Version: 5.27.8
  • GTK+ : 3.24.33-1ubuntu2
  • gcc : gcc version 12.3.0 (Ubuntu 12.3.0-1ubuntu1~22.04)
  • cflags : defaults
  • CMAKE_BUILD_TYPE : Release

Additional context

Intel OpenCL stack is from

https://github.com/intel/compute-runtime/releases/tag/23.39.27427.23 and https://github.com/intel/intel-graphics-compiler/releases/tag/igc-1.0.15136.22

This issue seems to be caused by restarting the pixel pipeline without releasing the cl_mem_output buffer when a restart is requested (e.g. due to two zoom-in commands in rapid succession). Adding a conditional release for it in the two KILL_SWITCH macros seems to fix the issue. With this applied, my GPU memory usage is stable.

I will create a pull request for this...

What is the current problem you are facing? I always import via "add to library", excluding the JPGs that are still present in the directory. I regularly destroy rejected images in the lighttable. The directory cleanup must also cover the relevant JPGs.

Where in your workflow does your problem occur? Once the RAWs have been deleted, a little dance in the terminal lets me destroy the corresponding JPGs:

diff <( ls *{ARW,CR2,CR3,NEF} 2>/dev/null | cut -f1 -d. ) \
     <( ls *JPG | cut -f1 -d. ) \
  | sed -nE "/^> (.*)/{s//\1.JPG/;p}" \
  | xargs rm

Having a list of rejected RAWs would allow me to easily remove the relevant JPGs from the directory.

Other examples could be imagined, but there's nothing very urgent or essential here.

Kindest regards

Sigma and Tokina

Review these changes using an interactive CodeSee Map

Legend

@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

Seems to be just a display refresh: https://petapixel.com/2024/01/11/the-lightly-upgraded-panasonic-g100d-is-coming-to-the-u-s/

Merge pull request #2152 from piratenpanda/patch-2

Add camera for later reference

Merge pull request #2151 from piratenpanda/patch-1

Add the Canon RF 135mm F1.8L IS USM

In the event of possible future problems, entering the camera model used can sometimes be helpful.


Add vignetting data for the Canon RF 14-35 #2150

Draft

piratenpanda wants to merge 1 commit into lensfun:master from piratenpanda:master

Conversation


piratenpanda (Contributor) commented on Jan 13, 2024:

Taken with a Gerd Neumann flatfield mask

piratenpanda marked this pull request as draft on January 13, 2024 18:55

piratenpanda (Contributor, Author) commented on Jan 13, 2024:

still not completely happy

piratenpanda (Contributor, Author) commented on Jan 14, 2024:

At f/4 it's really hard not to have circular residue in the image.

Fit looks like this:

Any ideas?

piratenpanda (Contributor, Author) commented on Jan 14, 2024:

I tried again with my 120cm softbox and opal white (milky) acrylic glass, with the same results. So I suggest using these values until someone comes up with a better idea.

piratenpanda marked this pull request as ready for review on January 14, 2024 13:37

cytrinox (Contributor) commented on Jan 14, 2024:

@piratenpanda Can you share (all) your flats with me? I calibrated my RF 14-35 a long time ago, but it was not perfect, so I never published it (minimal dark edges at 14mm). But I remember hitting the same ringing effect; it went away after changing the exposure time for a new set of flats.

I want to compare your flats against my profile; if mine has no ringing, I think it's better to use it :-)

cytrinox (Contributor) commented on Jan 14, 2024:

FYI: my profile: rf14_35.zip

piratenpanda (Contributor, Author) commented on Jan 14, 2024:

with your profile it's basically a bigger circle and more falloff to the edges:

This lens is a bit annoying..

the file: https://www.pandainthecloud.de/nextcloud/index.php/s/XSNbHY7sCgsY26y

piratenpanda (Contributor, Author) commented on Jan 14, 2024:

this is the best I can do:

with

and a new file https://www.pandainthecloud.de/nextcloud/index.php/s/t7xCmGYCHCtt2dH

The green channel always suffers at the edges, but otherwise I guess it's OK. I'll hopefully redo the shots this week and check whether the other focal lengths are also better when exposed just slightly below overexposure.

piratenpanda marked this pull request as draft on January 14, 2024 19:18


Commits

1 commit: Add vignetting data for the Canon RF 14-35 (piratenpanda, Jan 13, 2024)

100 additions & 0 deletions: data/db/mil-canon.xml


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (28fa956) 64.57% compared to head (cd963b0) 63.93%. Report is 2 commits behind head on main.

:exclamation: Current head cd963b0 differs from pull request most recent head 94c4ce7. Consider uploading reports for the commit 94c4ce7 to get more accurate results

@@ Coverage Diff @@
## main #2894 +/- ##
==========================================
- Coverage 64.57% 63.93% -0.64% 
==========================================
 Files 104 104 
 Lines 22196 22400 +204 
 Branches 10882 10877 -5 
==========================================
- Hits 14332 14322 -10 
- Misses 5622 5854 +232 
+ Partials 2242 2224 -18 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

hrm no new failures. Maybe some will pop up if I apply this to the CMake tests.

I hate conan.

2024-01-19T03:18:41.1850664Z FAIL: test_run (github.test_CVE_2018_12265.AdditionOverflowInLoaderExifJpeg.test_run)
2024-01-19T03:18:41.1851309Z ----------------------------------------------------------------------
2024-01-19T03:18:41.1851701Z Traceback (most recent call last):
2024-01-19T03:18:41.1852232Z File "/home/runner/work/exiv2/exiv2/tests/system_tests.py", line 652, in test_run
2024-01-19T03:18:41.1852852Z self.compare_stderr(i, command, processed_stderr, stderr)
2024-01-19T03:18:41.1853490Z File "/home/runner/work/exiv2/exiv2/tests/system_tests.py", line 773, in compare_stderr
2024-01-19T03:18:41.1854039Z self._compare_output(
2024-01-19T03:18:41.1854539Z File "/home/runner/work/exiv2/exiv2/tests/system_tests.py", line 745, in _compare_output
2024-01-19T03:18:41.1855098Z self.assertMultiLineEqual(
2024-01-19T03:18:41.1855909Z AssertionError: 'Erro[319 chars]ata area; ignored.\n' != 'Erro[319 chars]ata area; ignored.\nUncaught exception: Overflow in addition\n'
2024-01-19T03:18:41.1857144Z Error: Upper boundary of data for directory Image, entry 0x00fe is out of bounds: Offset = 0x0000002a, size = 64, exceeds buffer size by 22 Bytes; truncating the entry
2024-01-19T03:18:41.1858173Z Warning: Directory Image, entry 0x0201: Strip 0 is outside of the data area; ignored.
2024-01-19T03:18:41.1858879Z Warning: Directory Image, entry 0x0201: Strip 7 is outside of the data area; ignored.
2024-01-19T03:18:41.1859431Z + Uncaught exception: Overflow in addition
2024-01-19T03:18:41.1859784Z : Standard error does not match

ping @kevinbackhouse

nvm, that's probably a bug.

I am unsure how to proceed: in 6x6.xml the lens was already present, although with a different crop factor (0.577); mine is 0.644, for the full-frame 645 IQ180 digital back.

Mamiya 35mm 3.5.zip

Closed for the same reason as #2148.

I would like to add your data to the db, but personally I only merge calibration data when I have been able to test it on some images. Unfortunately, I could not find any raw images for the 80mm lens. So it would be nice if you could upload some pictures here. 2 normal shots and 2 shots you used for calibration would be enough.

Understood. However, I will close my pull request, since I made an error while calculating the crop factor of the IQ180 digital back that took the images. For fear of that trickling into the distortion calculations, and possibly from there downstream into the TCA, I'll redo the entire batch.

Once I reopen the pull request, I’ll upload images for verification immediately.

Ok, many thanks for the Info!

This is an automatic backport of pull request #2886 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Attention: 11 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (c2e7fcc) 64.00%. Report is 25 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2893 +/- ##
=======================================
 Coverage 63.99% 64.00% 
=======================================
 Files 103 104 +1 
 Lines 22338 22360 +22 
 Branches 10821 10833 +12 
=======================================
+ Hits 14296 14311 +15 
- Misses 5818 5823 +5 
- Partials 2224 2226 +2 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Synchronizes across exiftool and exiv2

NB: To be merged when https://github.com/Exiv2/exiv2/pull/2892 gets released.

See also https://github.com/lensfun/lensfun/issues/2146.

Exiv2 0.28.2 containing these changes has now been released.

This is an automatic backport of pull request #2889 done by Mergify.




Codecov Report

Attention: 11 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (fedfd82) 64.00%. Report is 23 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2892 +/- ##
=======================================
 Coverage 63.99% 64.00% 
=======================================
 Files 103 104 +1 
 Lines 22338 22360 +22 
 Branches 10821 10833 +12 
=======================================
+ Hits 14296 14311 +15 
- Misses 5818 5823 +5 
- Partials 2224 2226 +2 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #604 from LebedevRI/next

And some more ArrayRef touchups

This is an automatic backport of pull request #2890 done by Mergify.




Codecov Report

Attention: 10 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (77b66ca) 63.99%. Report is 19 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jp2image.cpp 50.00% 1 Missing :warning:
@@ Coverage Diff @@
## 0.28.x #2891 +/- ##
==========================================
- Coverage 63.99% 63.99% -0.01% 
==========================================
 Files 103 103 
 Lines 22338 22352 +14 
 Branches 10821 10829 +8 
==========================================
+ Hits 14296 14304 +8 
- Misses 5818 5823 +5 
- Partials 2224 2225 +1 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Workaround cmake-3.28 .modmap hell

See:
  • https://discourse.cmake.org/t/how-to-control-the-location-of-the-c-20-binary-module-interface-bmi-output-directory/7968/13
  • https://gitlab.kitware.com/cmake/cmake/-/merge_requests/9100
  • https://github.com/chromium/subspace/commit/b431afc30b606944735c2506ea6975f058034361

Revert "VariableLenghtBenchmark: further reduce size for CI"

This reverts commit 16028bf6d9e5347fa1576085e1b88b70e0c262c7.

Fixes #2884


Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (52d4451) 63.90% compared to head (e5133a2) 63.90%.

Files Patch % Lines
src/jp2image.cpp 50.00% 1 Missing :warning:
@@ Coverage Diff @@
## main #2890 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

Merge pull request #603 from LebedevRI/no-defaultinit

*ArrayRef.D shall not be default-constructible

VC5Decompressor: only create BandData in the end

It would default-init Array2DRef, and that's not great

Cr2Decoder::decodeMetaDataInternal(): don't default-init blackLevelSeparate's Array2DRef

FujiDecompressor: fuji_compressed_block::reset() we already have fuji_compressed_params

This way, the establishClassInvariants() actually always hold for any constructed type.

Merge pull request #602 from LebedevRI/clang-tidy-rawspeed-no-std-optional-check

rawspeed-no-std-optional check

X-Ref: https://github.com/darktable-org/rawspeed-clang-tidy-module/pull/6 X-Ref: https://github.com/darktable-org/rawspeed/pull/599

rawspeed-no-std-optional check

X-Ref: https://github.com/darktable-org/rawspeed-clang-tidy-module/pull/6 X-Ref: https://github.com/darktable-org/rawspeed/pull/599

Merge pull request #601 from LebedevRI/opensuse

Another boring opensuse mirror shuffling

Revert "provo-mirror.opensuse.org is broken, is there a better one?"

This reverts commit f110b3b71f6c9707b46f1b9d1e04c1dd351dbb1c.

Hello, I am using the Tamron 18-400mm lens on a Nikon D7500, and it is not recognized in darktable because the lensfun entry is "Tamron 18-400mm f/3.5-6.3 Di II VC HLD (B028)" while the EXIF of my pictures says "Tamron AF 18-400mm f/3.5-6.3 Di II VC HLD (B028)". I do not think those are different lenses, but I am unsure why the tests, which have apparently been done on a Nikon D750, would produce a different ID. I was about to duplicate the block with a slightly different name in the db, but I am opening this issue instead in case there is a more elegant solution.

Shall we just remove the "AF" from the exiv2 internal translation table?

It's not there in the exiftool one either, and the Tamron product page also doesn't have it...

In the meantime, one could work around it for darktable (or any other exiv2 client) by overriding the string in the user's .exiv2/exiv2.ini config file.

Thanks, it does seem to work (though darktable seems to bake the first detected lens into the XMP file, which means I have to open a new file; also it displays as "Tamron, Tamron 18-400… etc", which looks a bit weird, but at least corrections work).

though darktable seems to bake the first detected one into the xmp file

You can force-refresh the existing metadata: in the lighttable view, for selected files, somewhere on the left panel...

@kmilos Here you can find pictures taken with a Canon EOS 80D. Neither exiv2 nor exiftool lists an "AF". We could add an "AF" in lensfun; recognition would then work for both IDs. But a change in exiv2 would probably be better.

@tuxfanx exiv2 uses different internal lens tables for Canon vs Nikon bodies (unfortunately), and so does exiftool. (Yes, it was not an ideal design decision.)

So I think we'd better remove the "AF" from the Nikon one in exiv2. That way, this lens string is aligned for any Canon/Nikon and exiv2/exiftool combo.
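A possible client-side normalization can be sketched under the assumption that only the literal "AF " token differs between the two spellings. `stripAfToken` is hypothetical, not exiv2 or lensfun code.

```cpp
#include <string>

// Hypothetical helper: drop the vendor-specific "AF " token from an EXIF
// lens string so both spellings of the Tamron 18-400mm resolve to the
// same lensfun lookup key. Not taken from exiv2 or lensfun.
std::string stripAfToken(const std::string& name) {
  const std::string token = "AF ";
  auto pos = name.find(token);
  if (pos == std::string::npos)
    return name;  // already in the normalized form
  return name.substr(0, pos) + name.substr(pos + token.size());
}
```

Fixing the string at its source in the exiv2 table, as proposed above, avoids every client needing such a workaround.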

P.S. There's a couple more like this that lensfun would actually need to update, but let's first wait until the exiv2 change is accepted and published.

Ok, thanks for the info.


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (52d4451) 63.90% compared to head (bf50bc1) 63.90%.

@@ Coverage Diff @@
## main #2889 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 104 104 
 Lines 22389 22389 
 Branches 10876 10876 
=======================================
 Hits 14308 14308 
 Misses 5857 5857 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

Backport of https://github.com/Exiv2/exiv2/pull/2874 and an alternative to https://github.com/Exiv2/exiv2/pull/2887 due to the merge conflicts


Codecov Report

Attention: 10 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (1b257f6) 64.00%. Report is 19 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2888 +/- ##
=======================================
 Coverage 63.99% 64.00% 
=======================================
 Files 103 104 +1 
 Lines 22338 22360 +22 
 Branches 10821 10833 +12 
=======================================
+ Hits 14296 14311 +15 
- Misses 5818 5823 +5 
- Partials 2224 2226 +2 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #599 from LebedevRI/optional

Ban std::optional

GCC: disable -Wmaybe-uninitialized

build-GCC13-release$ /usr/bin/g++-13 -DNDEBUG -I/home/lebedevri/rawspeed/build-GCC13-release/src -I/home/lebedevri/rawspeed/src/librawspeed/decoders/.. -isystem /home/lebedevri/rawspeed/src/external -Wall -Wextra -Wcast-qual -Wextra -Wextra-semi -Wformat=2 -Wpointer-arith -Wvla -Wmissing-format-attribute -Wsuggest-attribute=format -Wno-unused-parameter -Wno-stringop-overflow -Wno-array-bounds -Wstack-usage=4096 -Wframe-larger-than=4096 -Wlarger-than=32768 -Werror -O3 -DNDEBUG -O3 -std=c++20 -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -march=native -g3 -ggdb3 -fopenmp -MD -MT src/librawspeed/decoders/CMakeFiles/rawspeed_decoders.dir/Cr2Decoder.cpp.o -MF src/librawspeed/decoders/CMakeFiles/rawspeed_decoders.dir/Cr2Decoder.cpp.o.d -o src/librawspeed/decoders/CMakeFiles/rawspeed_decoders.dir/Cr2Decoder.cpp.o -c /home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp
In function ‘rawspeed::Optional > > rawspeed::{anonymous}::deduceColorDataFormat(const rawspeed::TiffEntry*)’,
 inlined from ‘bool rawspeed::Cr2Decoder::decodeCanonColorData() const’ at /home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp:389:35:
/home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp:246:46: error: ‘((int*)((char*)& + offsetof(rawspeed::Optional::ColorDataFormat, rawspeed::Optional > >,rawspeed::Optional::ColorDataFormat, rawspeed::Optional > >::impl.std::optional::ColorDataFormat, rawspeed::Optional > >::.std::_Optional_base::ColorDataFormat, rawspeed::Optional >, true, true>::_M_payload.std::_Optional_payload::ColorDataFormat, rawspeed::Optional >, true, false, false>::.std::_Optional_payload_base::ColorDataFormat, rawspeed::Optional > >::_M_payload)))[1]’ may be used uninitialized [-Werror=maybe-uninitialized]
 246 | return {{ColorDataFormat::ColorData1, {}}};
 | ^
/home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp: In member function ‘bool rawspeed::Cr2Decoder::decodeCanonColorData() const’:
/home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp:246:46: note: ‘’ declared here
 246 | return {{ColorDataFormat::ColorData1, {}}};
 | ^
In function ‘rawspeed::Optional > > rawspeed::{anonymous}::deduceColorDataFormat(const rawspeed::TiffEntry*)’,
 inlined from ‘bool rawspeed::Cr2Decoder::decodeCanonColorData() const’ at /home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp:389:35:
/home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp:249:46: error: ‘((int*)((char*)& + offsetof(rawspeed::Optional::ColorDataFormat, rawspeed::Optional > >,rawspeed::Optional::ColorDataFormat, rawspeed::Optional > >::impl.std::optional::ColorDataFormat, rawspeed::Optional > >::.std::_Optional_base::ColorDataFormat, rawspeed::Optional >, true, true>::_M_payload.std::_Optional_payload::ColorDataFormat, rawspeed::Optional >, true, false, false>::.std::_Optional_payload_base::ColorDataFormat, rawspeed::Optional > >::_M_payload)))[1]’ may be used uninitialized [-Werror=maybe-uninitialized]
 249 | return {{ColorDataFormat::ColorData2, {}}};
 | ^
/home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp: In member function ‘bool rawspeed::Cr2Decoder::decodeCanonColorData() const’:
/home/lebedevri/rawspeed/src/librawspeed/decoders/Cr2Decoder.cpp:249:46: note: ‘’ declared here
 249 | return {{ColorDataFormat::ColorData2, {}}};
 | ^

std::optional does not generally verify that it has a value, and while such checks are optionally provided by some STLs, they are clearly not enabled unless one goes really out of the way; even oss-fuzz does not do that.

That hid a few (already-fixed by now) bugs. Let's just do the right thing.
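The idea can be sketched as follows. This is illustrative only, not rawspeed's actual Optional; the real class forwards far more of the std::optional interface. The point is that every access goes through a check that fires regardless of the STL's own (usually disabled) hardening.

```cpp
#include <cassert>
#include <optional>
#include <utility>

// Sketch of a forwarding Optional: delegate storage to std::optional,
// but assert has_value() on every dereference, so reading an empty
// Optional aborts even in builds where the STL's checks are off.
template <class T> class Optional {
  std::optional<T> impl;

public:
  Optional() = default;
  Optional(T v) : impl(std::move(v)) {}

  bool has_value() const { return impl.has_value(); }

  const T& operator*() const {
    assert(impl.has_value() && "dereferencing empty Optional");
    return *impl;
  }
};
```

With plain std::optional, `*o` on an empty optional is undefined behavior that can silently read garbage; with the forwarding wrapper it becomes a deterministic assertion failure.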

Codecov Report

Attention: 9 lines in your changes are missing coverage. Please review.

Comparison is base (fb84d41) 59.16% compared to head (acd5a0f) 59.25%.

Files Patch % Lines
src/librawspeed/adt/Optional.h 85.29% 5 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Common.h 0.00% 1 Missing :warning:
...z/librawspeed/decompressors/PentaxDecompressor.cpp 0.00% 1 Missing :warning:
src/librawspeed/decoders/Cr2Decoder.cpp 0.00% 1 Missing :warning:
src/librawspeed/decoders/NakedDecoder.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #599 +/- ##
===========================================
+ Coverage 59.16% 59.25% +0.09% 
===========================================
 Files 250 251 +1 
 Lines 14806 14840 +34 
 Branches 2000 1999 -1 
===========================================
+ Hits 8760 8794 +34 
- Misses 5924 5929 +5 
+ Partials 122 117 -5 
Flag Coverage Δ
benchmarks 10.47% <15.51%> (+0.03%) :arrow_up:
integration 46.68% <75.86%> (+0.09%) :arrow_up:
linux 56.92% <83.05%> (+0.09%) :arrow_up:
macOS 20.77% <29.72%> (+0.03%) :arrow_up:
rpu_u 46.68% <75.86%> (+0.09%) :arrow_up:
unittests 18.40% <32.20%> (+0.08%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Change Optional into a (forwarding) implementation

std::optional does not generally verify that it has a value, and while such checks are optionally provided by some STLs, they are clearly not enabled unless one goes really out of the way; even oss-fuzz does not do that.

That hid a few (already-fixed by now) bugs. Let's just do the right thing.

This is an automatic backport of pull request #2874 done by Mergify. Cherry-pick of 6b5cb98411bad3557cf07a6d8772ce5ea2a784bf has failed:

On branch mergify/bp/0.28.x/pr-2874
Your branch is ahead of 'origin/0.28.x' by 1 commit.
 (use "git push" to publish your local commits)

You are currently cherry-picking commit 6b5cb9841.
 (fix conflicts and run "git cherry-pick --continue")
 (use "git cherry-pick --skip" to skip this patch)
 (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Changes to be committed:

new file: samples/jpegparsetest.cpp

new file: tests/bash_tests/test_jpegparse.py

modified: tests/suite.conf

Unmerged paths:
 (use "git add ..." to mark resolution)

both modified: samples/CMakeLists.txt

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally




jpeg: add encodingProcess and num_color_components SOF members

Merge remote-tracking branch 'upstream/pr/598' into develop

  • upstream/pr/598:
    CroppedArray2DRef(): cropped dimensions shouldn't be too large either
    CroppedArray2DRef: extract class-global invariants into establishClassInvariants(), call it
    Array2DRef: call establishClassInvariants() in "every" method
    Array2DRef: extract class-global invariants into establishClassInvariants()
    CroppedArray1DRef: call establishClassInvariants() in every method
    CroppedArray1DRef: extract class-global invariants into establishClassInvariants()
    Array1DRef: extract class-global invariants into establishClassInvariants(), call it in every method
    CroppedArray1DRef: numElts shouldn't be too large by itself too
    Array1DRef::operator(): is only ever called on a valid Array1DRef
    Array1DRef::addressOf(): is only ever called on a valid Array1DRef
    Array1DRef::size(): is only ever called on a valid Array1DRef
    Array1DRef::getCrop(): size shouldn't be too large by itself too
    Array1DRef::getCrop(): only ever called on a valid Array1DRef
    Array1DRef: we are never cted with a nullptr.
    Array2DRef::getAsArray1DRef(): data must be non-null for us to succeed
    rstest: blackLevelSeparate.getAsArray1DRef() might theoretically return none
    Array2DRef: we never actually create it with a nullptr, so let's ban that

CroppedArray2DRef: extract class-global invariants into establishClassInvariants(), call it

CroppedArray1DRef: extract class-global invariants into establishClassInvariants()

Array1DRef: extract class-global invariants into establishClassInvariants(), call it in every method
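The pattern named in these commits can be sketched like this. This is a hypothetical Array1DRef, not rawspeed's real one: the class-wide invariants are collected into one establishClassInvariants() helper that every constructor and method re-asserts, so an invalid reference is caught at first use rather than deep inside a decoder.

```cpp
#include <cassert>

// Illustrative sketch of the establishClassInvariants() pattern.
template <class T> class Array1DRef {
  T* data;
  int numElts;

  // Class-global invariants, asserted in the ctor and in every method.
  void establishClassInvariants() const {
    assert(data != nullptr);  // never constructed with a nullptr
    assert(numElts >= 0);
  }

public:
  Array1DRef(T* data_, int numElts_) : data(data_), numElts(numElts_) {
    establishClassInvariants();
  }

  int size() const {
    establishClassInvariants();
    return numElts;
  }

  T& operator()(int i) const {
    establishClassInvariants();  // only ever called on a valid ref
    assert(i >= 0 && i < numElts);
    return data[i];
  }
};
```

In release builds the asserts compile away; in checked and fuzzing builds any method call on a reference whose invariants were broken fails immediately.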

IiqDecoder::decodeRawInternal(): do check that the raw block for strips is found

Resolves https://github.com/Exiv2/exiv2/issues/2877


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (349c1f2) 63.89% compared to head (464c9b2) 63.89%.

@@ Coverage Diff @@
## main #2886 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22381 22381 
 Branches 10872 10872 
=======================================
 Hits 14301 14301 
 Misses 5857 5857 
 Partials 2223 2223 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

This is an automatic backport of pull request #2881 done by Mergify. Cherry-pick of 4b3ce1d343316edeb62a338983ab6f3431cdf19d has failed:

On branch mergify/bp/0.28.x/pr-2881
Your branch is up to date with 'origin/0.28.x'.

You are currently cherry-picking commit 4b3ce1d34.
 (fix conflicts and run "git cherry-pick --continue")
 (use "git cherry-pick --skip" to skip this patch)
 (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Unmerged paths:
 (use "git add ..." to mark resolution)

both modified: src/canonmn_int.cpp

no changes added to commit (use "git add" and/or "git commit -a")

Cherry-pick of 724b7f85f41f390cb6d93ba3be40ccbd7f89e281 has failed:

On branch mergify/bp/0.28.x/pr-2881
Your branch is ahead of 'origin/0.28.x' by 1 commit.
 (use "git push" to publish your local commits)

You are currently cherry-picking commit 724b7f85f.
 (fix conflicts and run "git cherry-pick --continue")
 (use "git cherry-pick --skip" to skip this patch)
 (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Unmerged paths:
 (use "git add ..." to mark resolution)

both modified: src/canonmn_int.cpp

no changes added to commit (use "git add" and/or "git commit -a")

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Attention: 9 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (e20bba5) 63.99%. Report is 16 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2885 +/- ##
==========================================
- Coverage 63.99% 63.99% -0.01% 
==========================================
 Files 103 103 
 Lines 22338 22352 +14 
 Branches 10821 10829 +8 
==========================================
+ Hits 14296 14304 +8 
- Misses 5818 5823 +5 
- Partials 2224 2225 +1 


Merge pull request #597 from LebedevRI/unbuffering

Use Array1DRef in BitStream / untangle Buffer/ByteStream some more

Codecov Report

Attention: 32 lines in your changes are missing coverage. Please review.

Comparison is base (0a5a670) 59.11% compared to head (40c2645) 59.07%.

Files Patch % Lines
src/librawspeed/decoders/NefDecoder.cpp 0.00% 5 Missing :warning:
src/librawspeed/io/BitPumpJPEG.h 0.00% 4 Missing :warning:
src/librawspeed/io/BitPumpMSB.h 0.00% 3 Missing :warning:
src/librawspeed/io/BitPumpMSB32.h 0.00% 3 Missing :warning:
src/librawspeed/io/ByteStream.h 85.71% 3 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Dual.cpp 0.00% 2 Missing :warning:
src/librawspeed/decoders/OrfDecoder.cpp 0.00% 2 Missing :warning:
src/librawspeed/io/BitStream.h 84.61% 2 Missing :warning:
src/librawspeed/tiff/TiffIFD.h 0.00% 2 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Solo.cpp 0.00% 1 Missing :warning:
... and 5 more
@@ Coverage Diff @@
## develop #597 +/- ##
===========================================
- Coverage 59.11% 59.07% -0.05% 
===========================================
 Files 250 250 
 Lines 14773 14775 +2 
 Branches 2000 1998 -2 
===========================================
- Hits 8733 8728 -5 
- Misses 5919 5926 +7 
 Partials 121 121 
Flag Coverage Δ
benchmarks 10.26% <27.94%> (+0.01%) :arrow_up:
integration 46.49% <64.03%> (+0.07%) :arrow_up:
linux 56.76% <78.63%> (-0.02%) :arrow_down:
macOS 20.60% <26.82%> (-0.09%) :arrow_down:
rpu_u 46.49% <64.03%> (+0.07%) :arrow_up:
unittests 18.24% <26.11%> (-0.18%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.


Merge pull request #2143 from Macchiato17/master

Vignetting correction for Sigma 18-250mm f/3.5-6.3 DC OS Macro HSM and Sigma 24mm F1.4 DG HSM | A

variableLengthLoadNaiveViaMemcpy(): NFC, really establish invariants

Does not change final assembly, but is cleaner.

Merge remote-tracking branch 'upstream/pr/595' into develop

  • upstream/pr/595:
      • Drop variableLengthLoadNaiveViaStdCopy()
      • variableLengthLoadNaiveViaMemcpy(): rewrite without pointer arithmetic
      • Make variableLengthLoad() actually work on big-endian machines
      • Implement proper (best?) variableLengthLoad
      • Also benchmark the perf of the actual case
      • Also benchmark the perf of the best-case path
      • GCC: disable -Wstringop-overflow=/-Warray-bounds=, produces bogus warnings
      • Add alternative, variableLengthLoadNaiveViaStdCopy
      • Add alternative, simpler variableLengthLoadNaiveViaConditionalLoad implementation
      • Add test infra for VariableLengthLoad
      • Add basic benchmark infra for the VariableLengthLoad
      • Extract variableLengthLoadNaiveViaMemcpy() out of BitStreamForwardSequentialReplenisher::getInput()

Drop variableLengthLoadNaiveViaStdCopy()

std::copy() needs a pointer difference to compute the size, and while std::copy_n() helps with that, it still does not get optimized into a simple memcpy(), ending up as a conditional copy instead.

Three different implementations are plenty.

variableLengthLoadNaiveViaMemcpy(): rewrite without pointer arithmetic

inPos may be a past-the-end position, and forming that pointer is UB. Funnily enough, this rewrite also happens to be faster: one less register is used, which previously had to be spilled and reloaded.
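A minimal sketch of what such a UB-free variant can look like (illustrative signature and name, not the actual librawspeed code): the position is checked and clamped before any pointer is formed, so a past-the-end pointer, which would be UB to compute, is never created:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>

// Load up to sizeof(T) bytes starting at `pos`, zero-filling anything past
// the end of the buffer. Because `pos >= size` returns early, &data[pos]
// is only ever formed when it is in-bounds.
template <typename T>
T variableLengthLoadViaMemcpy(const uint8_t* data, size_t size, size_t pos) {
  T value = 0;
  if (pos >= size)
    return value; // fully out of bounds: nothing to load, return zeros
  size_t avail = std::min(sizeof(T), size - pos);
  std::memcpy(&value, &data[pos], avail); // in-bounds by construction
  return value;
}
```

The resulting byte order of `value` follows the host's endianness; the separate "actually work on big-endian machines" commit in the series addresses that aspect.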

build-Clang17-release$ /repositories/googlebenchmark/tools/compare.py -a benchmarks bench/librawspeed/adt/VariableLengthLoadBenchmark{-old,} --benchmark_repetitions=100000 --benchmark_filter="fixedLengthLoadOr.*variableLengthLoadNaiveViaMemcpy.*uint[36][24]_t" --benchmark_min_time=1x --benchmark_min_warmup_time=0.5
RUNNING: bench/librawspeed/adt/VariableLengthLoadBenchmark-old --benchmark_repetitions=100000 --benchmark_filter=fixedLengthLoadOr.*variableLengthLoadNaiveViaMemcpy.*uint[36][24]_t --benchmark_min_time=1x --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpg4dbf8dr
2024-01-07T20:35:27+03:00
Running bench/librawspeed/adt/VariableLengthLoadBenchmark-old
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 0.82, 1.15, 1.32
-----------------------------------------------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-----------------------------------------------------------------------------------------------------------------------------------------------
BM_Impl, uint32_t>/524288_mean 116 us 116 us 100000 Latency=220.58ps Throughput=4.23082Gi/s
BM_Impl, uint32_t>/524288_median 116 us 116 us 100000 Latency=221.329ps Throughput=4.20787Gi/s
BM_Impl, uint32_t>/524288_stddev 4.36 us 4.35 us 100000 Latency=8.28896ps Throughput=236.877Mi/s
BM_Impl, uint32_t>/524288_cv 3.77 % 3.76 % 100000 Latency=3.76% Throughput=5.47%
BM_Impl, uint64_t>/524288_mean 39.0 us 39.0 us 100000 Latency=74.3106ps Throughput=12.534Gi/s
BM_Impl, uint64_t>/524288_median 38.9 us 38.9 us 100000 Latency=74.2531ps Throughput=12.5425Gi/s
BM_Impl, uint64_t>/524288_stddev 0.449 us 0.409 us 100000 Latency=780.46fs Throughput=109.895Mi/s
BM_Impl, uint64_t>/524288_cv 1.15 % 1.05 % 100000 Latency=1.05% Throughput=0.86%
RUNNING: bench/librawspeed/adt/VariableLengthLoadBenchmark --benchmark_repetitions=100000 --benchmark_filter=fixedLengthLoadOr.*variableLengthLoadNaiveViaMemcpy.*uint[36][24]_t --benchmark_min_time=1x --benchmark_min_warmup_time=0.5 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmplk2hmw4g
2024-01-07T20:35:48+03:00
Running bench/librawspeed/adt/VariableLengthLoadBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 0.87, 1.14, 1.31
-----------------------------------------------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-----------------------------------------------------------------------------------------------------------------------------------------------
BM_Impl, uint32_t>/524288_mean 78.3 us 78.3 us 100000 Latency=149.272ps Throughput=6.23925Gi/s
BM_Impl, uint32_t>/524288_median 78.2 us 78.2 us 100000 Latency=149.174ps Throughput=6.24321Gi/s
BM_Impl, uint32_t>/524288_stddev 0.430 us 0.398 us 100000 Latency=759.527fs Throughput=30.7237Mi/s
BM_Impl, uint32_t>/524288_cv 0.55 % 0.51 % 100000 Latency=0.51% Throughput=0.48%
BM_Impl, uint64_t>/524288_mean 39.7 us 39.7 us 100000 Latency=75.6285ps Throughput=12.3154Gi/s
BM_Impl, uint64_t>/524288_median 39.6 us 39.6 us 100000 Latency=75.6073ps Throughput=12.3179Gi/s
BM_Impl, uint64_t>/524288_stddev 0.417 us 0.397 us 100000 Latency=757.501fs Throughput=102.284Mi/s
BM_Impl, uint64_t>/524288_cv 1.05 % 1.00 % 100000 Latency=1.00% Throughput=0.81%
Comparing bench/librawspeed/adt/VariableLengthLoadBenchmark-old to bench/librawspeed/adt/VariableLengthLoadBenchmark
Benchmark Time CPU Time Old Time New CPU Old CPU New
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BM_Impl, uint32_t>/524288_pvalue 0.0000 0.0000 U Test, Repetitions: 100000 vs 100000
BM_Impl, uint32_t>/524288_mean -0.3233 -0.3233 116 78 116 78
BM_Impl, uint32_t>/524288_median -0.3260 -0.3260 116 78 116 78
BM_Impl, uint32_t>/524288_stddev -0.9014 -0.9084 4 0 4 0
BM_Impl, uint32_t>/524288_cv -0.8543 -0.8646 0 0 0 0
BM_Impl, uint64_t>/524288_pvalue 0.0000 0.0000 U Test, Repetitions: 100000 vs 100000
BM_Impl, uint64_t>/524288_mean +0.0177 +0.0177 39 40 39 40
BM_Impl, uint64_t>/524288_median +0.0182 +0.0182 39 40 39 40
BM_Impl, uint64_t>/524288_stddev -0.0726 -0.0294 0 0 0 0
BM_Impl, uint64_t>/524288_cv -0.0887 -0.0463 0 0 0 0
OVERALL_GEOMEAN -0.1698 -0.1697 0 0 0 0

@bronger After the generation of calibration requests on GitHub stopped working, the zip files are unfortunately no longer stored on the server (see 3d76bc, e91e0d, bb2729, 776abc or 7ea4a0 on wilson). Is there a way to repair this?

@bronger I guess I didn't wait long enough, some of the zip files are now available.

Add alternative, variableLengthLoadNaiveViaStdCopy

Basically identical to the original variableLengthLoadNaiveViaMemcpy(), but without any UB; and, at least currently, when we are known to be in-bounds it does get optimized into a simple load.

Add alternative, simpler variableLengthLoadNaiveViaConditionalLoad implementation

Also benchmark the perf of the actual case

And that is the actual work loop in BitStreamForwardSequentialReplenisher, which is what we should optimize.

Implement proper (best?) variableLengthLoad

If we can make the load unconditional (after clamping the position), then we can later fix-up the loaded value, by bit-shifting it.

Though the resulting assembly is rather bad for >8 byte loads.
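The clamp-then-fix-up idea can be sketched as follows (illustrative names, not the real librawspeed code; the byte fix-up below assumes a little-endian host, whereas the actual series also handles big-endian): clamp the position so a full sizeof(T)-byte load is always in-bounds, perform the load unconditionally, then shift away the bytes that were over-read before the wanted position:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>

// Requires size >= sizeof(T). Little-endian fix-up: the wanted bytes sit
// `backoff` positions into the loaded value, so shifting right by
// 8*backoff discards the extra low bytes and zero-fills the top.
template <typename T>
T variableLengthLoadViaClamp(const uint8_t* data, size_t size, size_t pos) {
  assert(size >= sizeof(T)); // precondition for the clamping trick
  if (pos >= size)
    return T(0); // fully out of bounds: nothing to load
  size_t clampedPos = std::min(pos, size - sizeof(T));
  T value;
  std::memcpy(&value, &data[clampedPos], sizeof(T)); // always in-bounds
  size_t backoff = pos - clampedPos; // bytes loaded before the wanted position
  value >>= 8 * backoff;             // discard them (little-endian)
  return value;
}
```

The attraction is that the load itself becomes branch-free; only the clamp and shift depend on the position.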

Also benchmark the perf of the best-case path

While it is interesting to know how fast each implementation is on its own, in practice we almost always deal with the fully-in-bounds case, so it is good to know the theoretical maximum performance.

Extract variableLengthLoadNaiveViaMemcpy() out of BitStreamForwardSequentialReplenisher::getInput()

It is not obvious whether this snippet is the best solution to the problem, but even if it is, there is hidden UB: Base::data + Base::pos may form an out-of-bounds pointer, even when the size (bytesRemaining) is zero.

Merge pull request #596 from LebedevRI/opensuse

provo-mirror.opensuse.org is broken, is there a better one?

provo-mirror.opensuse.org is broken, is there a better one?

https://bugzilla.opensuse.org/show_bug.cgi?id=1202041

build-Clang17-release$ bench/librawspeed/adt/VariableLengthLoadBenchmark --benchmark_filter="uint[3][2]_t" --benchmark_repetitions=1000 --benchmark_min_time=1x --benchmark_display_aggregates_only=true
2024-01-07T03:11:09+03:00
Running bench/librawspeed/adt/VariableLengthLoadBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 1.45, 1.13, 1.00
--------------------------------------------------------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
--------------------------------------------------------------------------------------------------------------------------------------------------------
BM_Impl/524288_mean 58.0 us 58.0 us 1000 Latency=121.66ps Throughput=8.41783Gi/s
BM_Impl/524288_median 58.0 us 58.0 us 1000 Latency=121.53ps Throughput=8.42591Gi/s
BM_Impl/524288_stddev 0.732 us 0.652 us 1000 Latency=1.36709ps Throughput=83.6357Mi/s
BM_Impl/524288_cv 1.26 % 1.12 % 1000 Latency=1.12% Throughput=0.97%
BM_Impl/524288_mean 116 us 116 us 1000 Latency=243.646ps Throughput=4.20289Gi/s
BM_Impl/524288_median 116 us 116 us 1000 Latency=243.479ps Throughput=4.2057Gi/s
BM_Impl/524288_stddev 0.486 us 0.484 us 1000 Latency=1.01467ps Throughput=17.4973Mi/s
BM_Impl/524288_cv 0.42 % 0.42 % 1000 Latency=0.42% Throughput=0.41%
BM_Impl/524288_mean 168 us 168 us 1000 Latency=351.597ps Throughput=2.91248Gi/s
BM_Impl/524288_median 168 us 168 us 1000 Latency=351.294ps Throughput=2.91494Gi/s
BM_Impl/524288_stddev 0.834 us 0.731 us 1000 Latency=1.5323ps Throughput=12.5236Mi/s
BM_Impl/524288_cv 0.50 % 0.44 % 1000 Latency=0.44% Throughput=0.42%
BM_Impl/524288_mean 540 us 540 us 1000 Latency=1.10682ns Throughput=925.179Mi/s
BM_Impl/524288_median 540 us 540 us 1000 Latency=1.10606ns Throughput=925.806Mi/s
BM_Impl/524288_stddev 1.24 us 1.11 us 1000 Latency=2.31991ps Throughput=1.86664Mi/s
BM_Impl/524288_cv 0.23 % 0.20 % 1000 Latency=0.20% Throughput=0.20%
BM_Impl/524288_mean 540 us 540 us 1000 Latency=1.10669ns Throughput=925.28Mi/s
BM_Impl/524288_median 540 us 540 us 1000 Latency=1.10608ns Throughput=925.789Mi/s
BM_Impl/524288_stddev 0.928 us 0.790 us 1000 Latency=1.65743ps Throughput=1.34762Mi/s
BM_Impl/524288_cv 0.17 % 0.15 % 1000 Latency=0.15% Throughput=0.15%
BM_Impl, uint32_t>/524288_mean 58.0 us 58.0 us 1000 Latency=121.602ps Throughput=8.42126Gi/s
BM_Impl, uint32_t>/524288_median 58.0 us 57.9 us 1000 Latency=121.509ps Throughput=8.42736Gi/s
BM_Impl, uint32_t>/524288_stddev 0.368 us 0.367 us 1000 Latency=787.756fs Throughput=51.6112Mi/s
BM_Impl, uint32_t>/524288_cv 0.63 % 0.63 % 1000 Latency=0.63% Throughput=0.60%
BM_Impl, uint32_t>/524288_mean 77.5 us 77.5 us 1000 Latency=162.545ps Throughput=6.29993Gi/s
BM_Impl, uint32_t>/524288_median 77.5 us 77.4 us 1000 Latency=162.403ps Throughput=6.30528Gi/s
BM_Impl, uint32_t>/524288_stddev 0.378 us 0.378 us 1000 Latency=811.311fs Throughput=30.3316Mi/s
BM_Impl, uint32_t>/524288_cv 0.49 % 0.49 % 1000 Latency=0.49% Throughput=0.47%
BM_Impl, uint32_t>/524288_mean 77.7 us 77.7 us 1000 Latency=162.858ps Throughput=6.28787Gi/s
BM_Impl, uint32_t>/524288_median 77.6 us 77.6 us 1000 Latency=162.739ps Throughput=6.29228Gi/s
BM_Impl, uint32_t>/524288_stddev 0.517 us 0.446 us 1000 Latency=958.27fs Throughput=35.4747Mi/s
BM_Impl, uint32_t>/524288_cv 0.67 % 0.57 % 1000 Latency=0.57% Throughput=0.55%
BM_Impl, uint32_t>/524288_mean 77.9 us 77.9 us 1000 Latency=163.389ps Throughput=6.2674Gi/s
BM_Impl, uint32_t>/524288_median 77.8 us 77.8 us 1000 Latency=163.221ps Throughput=6.27369Gi/s
BM_Impl, uint32_t>/524288_stddev 0.380 us 0.379 us 1000 Latency=813.353fs Throughput=30.1957Mi/s
BM_Impl, uint32_t>/524288_cv 0.49 % 0.49 % 1000 Latency=0.49% Throughput=0.47%
BM_Impl, uint32_t>/524288_mean 78.2 us 78.1 us 1000 Latency=163.884ps Throughput=6.24847Gi/s
BM_Impl, uint32_t>/524288_median 78.1 us 78.1 us 1000 Latency=163.788ps Throughput=6.252Gi/s
BM_Impl, uint32_t>/524288_stddev 0.367 us 0.367 us 1000 Latency=787.45fs Throughput=28.9555Mi/s
BM_Impl, uint32_t>/524288_cv 0.47 % 0.47 % 1000 Latency=0.47% Throughput=0.45%

build-Clang17-release$ bench/librawspeed/adt/VariableLengthLoadBenchmark --benchmark_filter="uint[6][4]_t" --benchmark_repetitions=1000 --benchmark_min_time=1x --benchmark_display_aggregates_only=true
2024-01-07T03:11:22+03:00
Running bench/librawspeed/adt/VariableLengthLoadBenchmark
Run on (32 X 3400 MHz CPU s)
CPU Caches:
 L1 Data 32 KiB (x16)
 L1 Instruction 32 KiB (x16)
 L2 Unified 512 KiB (x16)
 L3 Unified 32768 KiB (x2)
Load Average: 1.42, 1.14, 1.00
--------------------------------------------------------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
--------------------------------------------------------------------------------------------------------------------------------------------------------
BM_Impl/524288_mean 40.2 us 40.2 us 1000 Latency=84.229ps Throughput=12.1596Gi/s
BM_Impl/524288_median 40.2 us 40.1 us 1000 Latency=84.1797ps Throughput=12.1645Gi/s
BM_Impl/524288_stddev 0.639 us 0.602 us 1000 Latency=1.26238ps Throughput=157.635Mi/s
BM_Impl/524288_cv 1.59 % 1.50 % 1000 Latency=1.50% Throughput=1.27%
BM_Impl/524288_mean 58.4 us 58.4 us 1000 Latency=122.476ps Throughput=8.36111Gi/s
BM_Impl/524288_median 58.4 us 58.4 us 1000 Latency=122.39ps Throughput=8.36671Gi/s
BM_Impl/524288_stddev 0.360 us 0.358 us 1000 Latency=769.389fs Throughput=49.8219Mi/s
BM_Impl/524288_cv 0.62 % 0.61 % 1000 Latency=0.61% Throughput=0.58%
BM_Impl/524288_mean 137 us 137 us 1000 Latency=286.67ps Throughput=3.57213Gi/s
BM_Impl/524288_median 137 us 137 us 1000 Latency=286.607ps Throughput=3.57283Gi/s
BM_Impl/524288_stddev 0.650 us 0.649 us 1000 Latency=1.36204ps Throughput=16.3394Mi/s
BM_Impl/524288_cv 0.48 % 0.48 % 1000 Latency=0.48% Throughput=0.45%
BM_Impl/524288_mean 251 us 251 us 1000 Latency=526.61ps Throughput=1.94453Gi/s
BM_Impl/524288_median 251 us 251 us 1000 Latency=526.28ps Throughput=1.94573Gi/s
BM_Impl/524288_stddev 0.756 us 0.677 us 1000 Latency=1.41881ps Throughput=5.28031Mi/s
BM_Impl/524288_cv 0.30 % 0.27 % 1000 Latency=0.27% Throughput=0.27%
BM_Impl/524288_mean 251 us 251 us 1000 Latency=526.693ps Throughput=1.94422Gi/s
BM_Impl/524288_median 251 us 251 us 1000 Latency=526.364ps Throughput=1.94542Gi/s
BM_Impl/524288_stddev 0.845 us 0.700 us 1000 Latency=1.46732ps Throughput=5.45387Mi/s
BM_Impl/524288_cv 0.34 % 0.28 % 1000 Latency=0.28% Throughput=0.27%
BM_Impl, uint64_t>/524288_mean 40.2 us 40.2 us 1000 Latency=84.2051ps Throughput=12.1615Gi/s
BM_Impl, uint64_t>/524288_median 40.2 us 40.2 us 1000 Latency=84.2426ps Throughput=12.1554Gi/s
BM_Impl, uint64_t>/524288_stddev 0.310 us 0.309 us 1000 Latency=663.128fs Throughput=90.6958Mi/s
BM_Impl, uint64_t>/524288_cv 0.77 % 0.77 % 1000 Latency=0.77% Throughput=0.73%
BM_Impl, uint64_t>/524288_mean 58.1 us 58.1 us 1000 Latency=121.842ps Throughput=8.41092Gi/s
BM_Impl, uint64_t>/524288_median 58.2 us 58.1 us 1000 Latency=121.949ps Throughput=8.39693Gi/s
BM_Impl, uint64_t>/524288_stddev 1.35 us 1.35 us 1000 Latency=2.84149ps Throughput=289.884Mi/s
BM_Impl, uint64_t>/524288_cv 2.33 % 2.33 % 1000 Latency=2.33% Throughput=3.37%
BM_Impl, uint64_t>/524288_mean 39.3 us 39.3 us 1000 Latency=82.388ps Throughput=12.4299Gi/s
BM_Impl, uint64_t>/524288_median 39.2 us 39.2 us 1000 Latency=82.2503ps Throughput=12.4498Gi/s
BM_Impl, uint64_t>/524288_stddev 0.357 us 0.356 us 1000 Latency=763.755fs Throughput=105.899Mi/s
BM_Impl, uint64_t>/524288_cv 0.91 % 0.91 % 1000 Latency=0.91% Throughput=0.83%
BM_Impl, uint64_t>/524288_mean 39.3 us 39.3 us 1000 Latency=82.3689ps Throughput=12.4334Gi/s
BM_Impl, uint64_t>/524288_median 39.2 us 39.2 us 1000 Latency=82.3132ps Throughput=12.4403Gi/s
BM_Impl, uint64_t>/524288_stddev 0.486 us 0.484 us 1000 Latency=1.01573ps Throughput=125.609Mi/s
BM_Impl, uint64_t>/524288_cv 1.24 % 1.23 % 1000 Latency=1.23% Throughput=0.99%
BM_Impl, uint64_t>/524288_mean 39.7 us 39.7 us 1000 Latency=83.2796ps Throughput=12.2968Gi/s
BM_Impl, uint64_t>/524288_median 39.7 us 39.7 us 1000 Latency=83.215ps Throughput=12.3055Gi/s
BM_Impl, uint64_t>/524288_stddev 0.434 us 0.346 us 1000 Latency=742.698fs Throughput=101.825Mi/s
BM_Impl, uint64_t>/524288_cv 1.09 % 0.87 % 1000 Latency=0.87% Throughput=0.81%

Codecov Report

Attention: 36 lines in your changes are missing coverage. Please review.

Comparison is base (c2b2976) 58.90% compared to head (1286399) 59.11%.

Files Patch % Lines
src/librawspeed/adt/VariableLengthLoad.h 67.12% 24 Missing :warning:
test/librawspeed/adt/VariableLengthLoadTest.cpp 80.48% 8 Missing :warning:
src/librawspeed/io/BitStream.h 25.00% 3 Missing :warning:
...ch/librawspeed/adt/VariableLengthLoadBenchmark.cpp 97.95% 1 Missing :warning:
@@ Coverage Diff @@
## develop #595 +/- ##
===========================================
+ Coverage 58.90% 59.11% +0.20% 
===========================================
 Files 247 250 +3 
 Lines 14609 14770 +161 
 Branches 1991 2000 +9 
===========================================
+ Hits 8606 8731 +125 
- Misses 5882 5918 +36 
 Partials 121 121 
Flag Coverage Δ
benchmarks 10.23% <56.88%> (+0.51%) :arrow_up:
integration 46.41% <5.47%> (-0.48%) :arrow_down:
linux 56.77% <66.88%> (+0.09%) :arrow_up:
macOS 20.69% <73.83%> (+0.42%) :arrow_up:
rpu_u 46.41% <5.47%> (-0.48%) :arrow_down:
unittests 18.40% <49.70%> (+0.34%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.


I am searching for profiles for:

  • Sigma 24-70mm f/2.8 DG OS HSM | Art (Canon mount)
  • Canon EF 70-200mm f/2.8L IS III USM

If necessary I can contribute pictures.

For this, we need correctly taken pictures. How to take these pictures is described here (section "Taking pictures") and partly also here and here. The images can then be uploaded from this website. Please store the images separately for each lens in folders (distortion, tca, vignetting) and provide the file names of the images with the focal length and aperture. The calibration can then be carried out by the members of this project. The other option is to do the calibration yourself and make the data and some raw pictures for testing available to this project.

Here you will find some hints:

  • calibration with lens_calibrate.py and taking pictures
  • calibration and taking pictures
  • video 1: calibration with Hugin
  • video 2: calibration with Hugin
  • lensfun's coverage
  • lensfun's database
  • calibration with calibrate.py

Describe the bug

When writing any Xmp metadata to a jp2 file, the resulting file has all xmp metadata erased.

To Reproduce

Steps to reproduce the behavior:

exiv2 -px myfile.jp2

Xmp.xmp.CreateDate XmpText 19 2011-05-12T12:09:15
Xmp.xmp.CreatorTool XmpText 23 Capture One 9.3 Windows
Xmp.aux.SerialNumber XmpText 8 EJ031825
Xmp.aux.ImageNumber XmpText 6 221317
Xmp.aux.Firmware XmpText 96 P65+-M, Firmware: Main=5.2.2, Boot=2.3, FPGA=1.2.4, CPLD=5.0.1, PAVR=1.0.3, UIFC=1.0.1, TGEN=1.0
Xmp.photoshop.DateCreated XmpText 19 2011-05-12T12:09:15
Xmp.photoshop.Credit XmpText 14 Jenna Courtade
Xmp.photoshop.LegacyIPTCDigest XmpText 32 549E5EDC2A6B82049187A91955643B63
Xmp.iptc.CreatorContactInfo XmpText 0 type="Struct"
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiEmailWork XmpText 41 digitizationservices@library.illinois.edu
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiTelWork XmpText 14 +1(217)2442062
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiAdrPcode XmpText 5 61801
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiUrlWork XmpText 64 https://www.library.illinois.edu/preservation/digitization-servi
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiAdrExtadr XmpText 21 1408 W. Gregory Drive
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiAdrCity XmpText 6 Urbana
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiAdrCtry XmpText 13 United States
Xmp.iptc.CreatorContactInfo/Iptc4xmpCore:CiAdrRegion XmpText 8 Illinois
Xmp.dc.title LangAlt 1 lang="x-default" SLC 
Xmp.dc.creator XmpSeq 2 University of Illinois Library, University of Illinois Library

exiv2 --Modify "set Xmp.dc.creator library" myfile.jp2

exiv2 -px myfile.jp2 

The console shows nothing.

I get the same result when I try with the C++ api as well.

This is not a problem for tiff files, nor is it a problem with Exif metadata. I can't speak to other file formats because I only work with jp2 and tiff files.

Desktop (please complete the following information):

  • OS and version: macOS 13.6
  • Exiv2 version and source: 0.28.1 from homebrew and from source
  • Compiler and version: Apple clang version 15.0.0
  • Compilation mode and/or compiler flags: cmake defaults

The last version that I can confirm this working is v0.27.6.

Can anybody else confirm this?

Think I might have found why. Can you please test https://github.com/Exiv2/exiv2/pull/2890?

@kmilos this looks like it fixes it.

After updating to 0.28 I got a lot of errors with image processing on jp2 files and I thought I was taking crazy pills.

Nice job on getting this fixed.

Thanks for catching this and testing!


@kmilos Of course! Exiv2 is one of the greatest tools/friends to digital archivists. It's faster and more robust than any other alternative out there.

Any idea when 0.28.2 will be released? You fixed a pretty bad bug that corrupted files. I would think that it would be fast-tracked for release so it doesn't affect other people's jp2 files.


@Exiv2/exiv2 https://github.com/Exiv2/exiv2/milestones

In the meantime, you can ask your favorite distro exiv2 package maintainer to include the relevant patch.

The information was right there in front of me the whole time. How embarrassing...

I actually build exiv2 from source so I can patch it myself. Good idea. Thanks!


The fix is already committed to the 0.28.x branch as well.


aurelienpierreeng / ansel (Public; forked from edgardoh/darktable; Fork 1.1k · Star 561)


Feature request: User-defined module groups #315

Closed

rprego opened this issue Jan 5, 2024 · 1 comment

Labels: question, wontfix

rprego commented on Jan 5, 2024:

The ability to create a user-defined module group in Darktable was something I found very useful there. It seems like that feature was removed in Ansel. Is there a possibility for continuing to include the ability to have a user-defined module list for modules that someone uses in processing most of their images?


rprego assigned aurelienpierre on Jan 5, 2024

rprego changed the title from "User-defined module groups" to "Feature request: User-defined module groups" on Jan 5, 2024

Jiyone added the question and wontfix labels on Jan 5, 2024

aurelienpierre (Collaborator) commented on Jan 6, 2024:

No; module groups are organized workflow-wise and pipeline-wise to guide the workflow through the 86 modules we have. What Darktable does only gives users the wrong idea that workflow is a personal choice consisting of arranging GUI widgets in a pretty way, when it is really about arranging layers in an ordered vertical stack where order matters.

aurelienpierre closed this as completed on Jan 6, 2024

3 participants



Comparing changes

1 commit · 1 file changed · 1 contributor

Commits on Jan 5, 2024

Fixed error and improved some strings

mpaglia0 authored and aurelienpierre committed on Jan 5, 2024 · e2c4a0a


This PR provides vignetting correction for:

  • Sigma 18-250mm f/3.5-6.3 DC OS Macro HSM (at 18mm, 32mm, 52mm, 130mm, 250mm)
  • Sigma 24mm F1.4 DG HSM | A

Values were determined using a Nikon D7500, thus valid for a crop factor of 1.5.

I have the impression that the correction parameters provided here are too aggressive, since quite a large area of the picture is affected by the correction. Perhaps this is right, but I'm not sure. Therefore I would like to review the current vignetting parameters by redoing the photos with a different diffuser.

@Macchiato17 An evenly lit area is also important. When artificial light is used on a wall surface, the edges of the image are often slightly darker than the center. I would like to test your profiles and add them to the db if the correction is good, but personally I only merge calibration data once I have been able to test it on some raw pictures. Unfortunately, I could not find any raw images for your lenses, so it would be nice if you could upload some pictures here. 2-3 normal shots (even surfaces with little structure, taken with an open aperture, are good, e.g. plain, smooth walls or a blue sky) and 2-3 shots you used for calibration would be enough.

I really appreciate your kind offer to check the correction data on your side. I uploaded the RAW files that I used for creating the vignetting correction parameter set. I also added some "normal" shots I took with both lenses today. If you need other pictures, just give me a ping. Regarding vignetting correction, I mainly followed the guide on pixls.us again. All photos were taken outside, against a grayish, featureless sky, using a diffuser directly mounted in front of the lens. The camera was mounted on a tripod. Focus set to infinity.

Many thanks! The profile for the 24mm lens is OK; it may have only a minimal overcorrection. Unfortunately, the images for the zoom lens are missing on the server. Two uploads were created for you, but without a zip file containing images. The same thing happened to someone else; maybe the server has a problem. Please try once more. If it does not work, I'll send you another cloud address for the upload.

OK, I tried again, please check if the two ZIPs were uploaded 😃

Thanks for merging 😄

Added vignetting correction for two lenses: - Sigma 18-250mm f/3.5-6.3 DC OS Macro HSM (at 18mm, 32mm, 52mm, 130mm, 250mm) - Sigma 24mm F1.4 DG HSM | A Values were determined using a Nikon D7500, thus valid for a crop factor of 1.5.

I think we should try to separate the C and C++ implementations. We are using a vector but then "reserve" a new element and assign it to NULL; I haven't seen this pattern before. Usually, when the vector size is known, we call reserve to trigger the reallocation if needed. To me it feels like a C hack inside C++ code.

With this implementation the code is fast but tightly coupled to the C interface, and we can't change the data types of the implementation without updating the C interface first. But if we want to decouple the C interface from the C++ implementation, the ownership of the memory needs to be addressed. Currently the caller is not the owner of the memory and doesn't need to free it, which is good!

const lfMount * const *lf_db_get_mounts (lfDatabase *db)
{
    return db->GetMounts ();
}

const lfMount * const *lfDatabase::GetMounts ()
{
    size_t size = Mounts.size();
    Mounts.reserve(size + 1);
    Mounts.data()[size] = NULL;
    return Mounts.data();
}

I would propose to update the C-interface with something like:

enum lf_ret_type {
    LF_RET_MOUNT
};

struct lfMount_struct {
    int sz;
    enum lf_ret_type type;
    lfMount **items;
};

union lf_ret_structs {
    struct lfMount_struct mounts;
    /* other result types can be added here */
};

void lf_free (union lf_ret_structs *ret) {
    /* free up memory */
}

void lf_db_get_mounts (lfDatabase *db, struct lfMount_struct **ret) {
    /* copies data from the shared pointer vector to a C data struct */
}

/* or maybe use a unique_ptr instead */
std::vector<std::shared_ptr<lfMount>> lfDatabase::GetMounts () const
{
    return mounts;
}

The downside is that it adds more complexity to the C-interface, now the caller is the owner of the memory and needs to free it when it's no longer needed, but I can't see any other way around it.
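As a rough, standalone C++ sketch of that caller-owned alternative (all names below, `MountList`, `db_get_mounts`, `mount_list_free`, are hypothetical illustrations, not the actual lensfun API): the wrapper copies pointers out of the internal vector into a plain array, and the caller is responsible for freeing it:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical stand-in for lfMount.
struct Mount { int id; };

// Hypothetical C-style result struct: the caller owns `items`
// and must release it with mount_list_free().
struct MountList {
    int sz;
    Mount** items;
};

// Copies the pointers held by the internal C++ vector into a
// caller-owned, malloc'ed array (the C side never sees std::vector).
void db_get_mounts(const std::vector<Mount*>& internal, MountList* ret) {
    ret->sz = static_cast<int>(internal.size());
    ret->items = static_cast<Mount**>(std::malloc(sizeof(Mount*) * internal.size()));
    for (int i = 0; i < ret->sz; ++i)
        ret->items[i] = internal[static_cast<std::size_t>(i)];
}

// Frees only the array, not the Mount objects themselves, which stay
// owned by the library -- mirroring the ownership split discussed above.
void mount_list_free(MountList* ret) {
    std::free(ret->items);
    ret->items = nullptr;
    ret->sz = 0;
}
```

This decouples the C ABI from the internal container type, at the cost of a copy per call and a new `_free` function the caller must remember to use.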

What do you think?

I'd say that the code with reserve is not just a C hack; it can even be dangerous. It does not change vector.size(), and it doesn't handle the extra object (no copy, move or reallocation). And since it doesn't change vector.size(), all subsequent calls will overwrite 'the last' element. So

...In the current way the caller is not the owner of the memory, and doesn't need to free it...

is not quite right: nobody owns that memory, and it's a potential memory-leak point.

Also, I'd say the overall code is very C-centric, and since CMake has a C++14 requirement, can it be modernized? I think I have some spare time and can make a few pull requests related to the C++ updates.

While I agree that this is a little hacky, here's some background on why it was done that way. A very long time ago lensfun used its own vector type, which was a null-terminated C array. To modernise at least the C++ code I decided to replace the custom vector type with std::vector. The C interface in lensfun was always just a thin layer around the C++ code. To keep the C API backwards-compatible I came up with the approach of null-terminating the std::vector by reserving an additional element. This might look strange but IMO is not as bad as @CAHEK7 wrote:

  • reserve guarantees at least the requested capacity and a vector is always contiguous memory, so accessing the slot at index size is safe.
  • The vectors cannot be changed by any lensfun callers, so we only have to ensure internally that any vector change maintains the null-termination. An outside caller cannot break this with the existing API as far as I can see.
  • The C interface only returns a pointer structure which itself is const. The referenced memory can't leak because it is managed by the internal vector, and the vector cannot be changed by the caller either. So IMO it is clear that the memory is owned by the lensfun library and the caller does not have to free the structure.

That's just to provide a bit of background and reasoning for this slightly weird structure. Feel free to make any changes, but remember that further breaking the C API might not increase the acceptance and stability of a project which already suffers from an overall lack of developer resources. If you have spare time, looking at the issues in the Stable release milestone could be a good starting point. https://github.com/lensfun/lensfun/milestone/1

I guess it's better to use explicit _lf_terminate_vec instead of the code in lfDatabase::GetMounts.

Thanks for the background @seebk. It was good to get rid of the C arrays; the std::vector trick of reserving one extra element and writing NULL is actually quite neat :)

I agree with you that we shouldn't continue to break the C API. The code is very efficient when we expose the raw data structures via the C API, but it also makes refactoring hard because the two are tightly coupled.

I'll take a look at the milestones.

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage No data about Duplication

See analysis details on SonarCloud

Thanks

I think it would be nice to separate the code base from the calibration data and make the calibration data a git submodule in the lensfun repo instead. What do you think of that proposal?

There is already a (old) discussion about something similar: https://github.com/lensfun/lensfun/issues/1172

A separate repo seems like a good idea. However, I'm not sure what the best way to do this is.

  • So it would be necessary to update the submodule, I suppose before every release?
  • How do we handle the different database versions? Convert when necessary or a different branch or something else?

Just a submodule would have the advantage of separate issues, commits and pull requests with the downside of additional git commands for cloning lensfun and updating the submodule.

For users it would probably be best to have a lensfun and a lensfun-data package with more regular releases, because many don't know about lensfun-update-data.

I don't know what's best for packaging lensfun.

Okay, thanks for the update @jonas-ott, maybe it's nice to have in theory but it's harder to implement in practice. I'll close this issue.

Description of the bug

The export result appears considerably darker than the image shown in the darkroom's central view.

To Reproduce

  1. Open Ansel
  2. Export a picture A
  3. Open picture A in darkroom
  4. Press CTRL+W
  5. Compare with exported picture A

Expected behavior

An exported image equivalent to that seen in the darkroom.

Context

Ansel commit 8801a4d, built on ubuntu 22.04

Export parameters export_parms


Exported picture export


Ansel darkroom's preview central_view


Is your image viewer color managed? I have something similar when using the default viewer on windows.

pedrorrodriguez The issue disappears when the tone equalizer module is deactivated.


masks: hiding the mask view randomly causes a black or disturbed central view. #312

Open

blonchkman opened this issue Jan 4, 2024 · 2 comments


blonchkman

commented

Jan 4, 2024

Description of the bug

Hiding the mask view after a movement of the detail threshold randomly causes a black or disturbed central view. This behavior happens to me one or two times out of 10 when I remove the mask view after moving the detail slider.

To Reproduce

Preparation:

create a new Color Calibration instance

create a parametric mask on this instance

Reproduction:

Activate mask view (single-click on button)

Change detail threshold, left or right

Deactivate mask view (single-click on button)

Repeat step 1 until perturbation at step 3.

Expected behavior

Return to current central view.

Context

Screenshots

System

Ansel, commit 8801a4d Ubuntu linux 22.04

Additional context

The random erratic behavior of the central view is not linked to the detail threshold.

Another person acting on the G and Jz parameter sliders confirms a disturbed central view after 20 attempts, but no black occurrences.

I've noticed one disturbed central view on 50 tests and 1 or 2 black occurrences on 10 tests.

I still have to check whether color calibration is the only module affected.


aurelienpierre

commented

Jan 4, 2024

Details masks are a terrible hack meant to alleviate the detrimental side-effects of sharpening algorithms by allowing them to exclude "edges":

  • they are computed in the demosaicing module at full resolution (in any case, at a higher resolution than the preview buffer),

  • they need to pass through several geometric distortions, inducing errors,

  • they need to be downsampled for the module using them, inducing errors,

  • their darkroom preview is heavily zoom-dependent, to the point where their effect might be completely invisible when zoomed out.

It's a brittle design that can't be fixed, to solve a problem that should be fixed within the sharpening algorithms themselves (diffuse & sharpen uses a variance-based edge detection to regularize the effect near sharp edges).

The details mask will be deprecated.


rprego

commented

Jan 5, 2024

On 4d185b4 , I see similar issues with masks not disabling, for example when using the tone equalizer or highlights reconstruction. Clicking the display exposure mask button, then disabling it, leaves the mask displayed until the preview is zoomed in/out and redisplayed.


The "Retouch" module does not display or apply corrections.

image image

The problem is reproduced starting from Ansel-ba4dc39-x86_64.AppImage by the current build.

The problem occurs the first time a shape is added to the module. When you reopen the image or add more shapes, the problem goes away.

The console displays:

[_dev_add_history_item_ext] invalidating history
[_dev_add_history_item_ext] invalidating history
[_dev_add_history_item_ext] invalidating history
[_dev_add_history_item_ext] invalidating history
rt_process_forms: missing form=1704386040 from array
rt_process_forms: missing form=1704386040 from array

See src/iop/retouch.c:

const int index = rt_get_index_from_formid(p, formid);
if(index == -1)
{
    // FIXME: we get this error when user go back in history, so forms are the same but the array has changed
    fprintf(stderr, "rt_process_forms: missing form=%i from array\n", formid);
    continue;
}

Originally posted by @vtyrtov in https://github.com/aurelienpierreeng/ansel/issues/310#issuecomment-1877403103

Duplicate of #310

Merge pull request #2139 from mime01/sigma_mounts

Add further mounts to Sigma 100-400mm F5-6.3 DG OS HSM

Nightly builds: checkout 700 commits and tags

Solve version tagging issues

Description of the bug

The "Retouch" module does not display or apply corrections.

The circular shape clone tool fails to apply the correction. When attempting the same correction in Darktable, the results are successful. Importing the XMP file from Darktable into Ansel results in the correction being successfully applied.

image_2 image_1

"I updated this morning from Git. Issue resolved."

The problem is reproduced starting from Ansel-ba4dc39-x86_64.AppImage by the current build.

The problem occurs the first time a shape is added to the module. When you reopen the image or add more shapes, the problem goes away.

The console displays:

[_dev_add_history_item_ext] invalidating history
[_dev_add_history_item_ext] invalidating history
[_dev_add_history_item_ext] invalidating history
[_dev_add_history_item_ext] invalidating history
rt_process_forms: missing form=1704386040 from array
rt_process_forms: missing form=1704386040 from array

See src/iop/retouch.c:

const int index = rt_get_index_from_formid(p, formid);
if(index == -1)
{
    // FIXME: we get this error when user go back in history, so forms are the same but the array has changed
    fprintf(stderr, "rt_process_forms: missing form=%i from array\n", formid);
    continue;
}

I'm having this problem too. It seems that sometimes when you add a mask you can't alter its opacity either.

I'm having the same problem with build Ansel-0.0.0+729~ge2c4a0a-x86_64

Cf. Exiftool 12.72


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (b4f145e) 63.89% compared to head (ab79d66) 63.89%.

@@ Coverage Diff @@
## main #2881 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22381 22381 
 Branches 10872 10872 
=======================================
 Hits 14301 14301 
 Misses 5857 5857 
 Partials 2223 2223 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@mergify backport 0.28.x

backport 0.28.x

✅ Backports have been created

The Sigma 100-400mm F5-6.3 DG OS HSM | Contemporary is also available for the "Canon EF", "Sigma SA" and "Nikon F AF" mounts.

I can't find the Sony E-mount for this lens in the manufacturer's technical data either. Could it be confused with the 100-400mm F5-6.3 DG DN OS | Contemporary lens? The two lenses do not have the same optical design, however.

Maybe a Sigma MC-11 adapter was used, and a Sony mount was therefore specified. However, that is not the correct approach.

Add further mounts to Sigma 100-400mm F5-6.3 DG OS HSM

The Sigma 100-400mm F5-6.3 DG OS HSM | Contemporary is also available for the "Canon EF", "Sigma SA" and "Nikon F AF" mounts.

I wish that S5M2 raw photos could be supported.

Camera support is in the scope of the Rawspeed lib. Nothing can be done here.

Start using smart pointers instead of raw pointers

This is a manual backport of https://github.com/Exiv2/exiv2/pull/2860


⚠️ The sha of the head commit of this PR conflicts with #2880. Mergify cannot evaluate rules on this PR. ⚠️

Description of the bug

To Reproduce

Crop an image

Rotate the image in Horizon and perspective

Expected behavior

Image should remain consistent

Context

Screenshot from 2024-01-03 03-24-26 Screenshot from 2024-01-03 03-23-55 Screenshot from 2024-01-03 03-23-11

I have to disable the perspective module and restart ansel to get the correct image back.

Which commit introduced the error

?

System

  • ansel version : 0.0.0+727~g4d185b439 ( 4d185b43938cf7f81984706b0a8e50c57f901b04 )
  • OS : Linux 6.6.9-zen
  • Linux - Distro : Arch
  • Memory : 40 GB
  • Graphics card : gfx1032
  • Graphics driver :
  • OpenCL installed : rocm 6.0
  • OpenCL activated : yes, tested without too
  • Xorg : wayland
  • Desktop : gnome
  • GTK+ :
  • gcc : 13.2.1
  • cflags : https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=ansel-git
  • CMAKE_BUILD_TYPE : Release

Additional context

  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
  • Do you use lua scripts?
  • What lua scripts start automatically? dtMediaWiki

I also reproduced this.

cflags : https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=ansel-git

When you are building AUR packages, cflags are actually set in /etc/makepkg.conf.


po: Update zh_CN translation to adapt non-ASCII pruning #307

Merged

aurelienpierre merged 1 commit into

aurelienpierreeng :master

from

AlynxZhou :update-zh_CN-translation

Jan 2, 2024


AlynxZhou

commented

Jan 2, 2024

Adapt 62a3800 .

AlynxZhou force-pushed the update-zh_CN-translation branch on January 2, 2024 at 08:00, and again at 08:21.

sonarcloud bot commented Jan 2, 2024

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage 2.2% Duplication on New Code

See analysis details on SonarCloud

aurelienpierre merged the commit into aurelienpierreeng:master on Jan 2, 2024. 1 check passed.


po: Update zh_CN translation to adapt non-ASCII pruning

Adapt 62a38008b42d42996dfcfb8d79541d50ea863786.

Is your feature request related to a problem?

No.

Describe the solution you would like

I want to get EXIF data from a JXR file.

Describe alternatives you have considered

Dump from exiv2:

.\exiv2.exe "Alan Wake 2 2024_1_1 17_07_43.jxr"
File name : Alan Wake 2 2024_1_1 17_07_43.jxr
File size : 14278162 Bytes
MIME type : image/tiff
Image size : 0 x 0
Thumbnail : None
Camera make :
Camera model :
Image timestamp :
File number :
Exposure time :
Aperture :
Exposure bias :
Flash :
Flash bias :
Focal length :
Subject distance:
ISO speed :
Exposure mode :
Metering mode :
Macro mode :
Image quality :
White balance :
Copyright :
Exif comment :

Dump from exiftool:

ExifTool Version Number : 12.72
File Name : Alan Wake 2 2024_1_1 17_07_43.jxr
Directory : 
File Size : 14 MB
File Modification Date/Time : 2024:01:01 17:07:44+08:00
File Access Date/Time : 2024:01:02 21:53:29+08:00
File Creation Date/Time : 2024:01:01 17:07:43+08:00
File Permissions : -rw-rw-rw-
File Type : JXR
File Type Extension : jxr
MIME Type : image/jxr
Exif Byte Order : Little-endian (Intel, II)
About : uuid:faf5bdd5-ba3d-11da-ad31-d33d75182f1b
Id : b4517361-573b-4479-ae81-cad4a6406eeb
Title : Alan Wake 2
Author : Microsoft Game DVR
Basic : 0!!!!f411a640-05da-47b1-9b30-7f69d5ee6948!!0!!1!!1337!!2007965160
Basic Hash : cf367a358dd9f4ae5df9684a98b196cefbb7c9d39ff0b64665495b9f9a47be62
Extended : 
Pixel Format : 64-bit RGBA Half
Transformation : Horizontal (normal)
Image Type : (none)
Image Width : 2560
Image Height : 1440
Width Resolution : 96
Height Resolution : 96
Image Offset : 1178
Image Byte Count : 14268658
Alpha Offset : 14269836
Alpha Byte Count : 8326
Image Size : 2560x1440
Megapixels : 3.7

Desktop

  • OS and version: Windows 11
  • Exiv2 version and source: 0.28.1
  • Any software using exiv2 and source: just exiv2 command line tool

Additional context

Duplicate of https://github.com/Exiv2/exiv2/issues/2120

Please search before opening new issues.


Projects

2 Open, 0 Closed

Ansel development management (1, updated Jan 2, 2024)

Management (1, updated Dec 13, 2023)

R&Darktable tasks and roadmap

Darkroom: do not fire the gui_post_expose callback if the module is not enabled.

Doesn't make sense, triggers weird corner cases.

Composition guides: general rework

  1. remove feature duplication in modules and user config: guides are managed at the darkroom level.
  2. fix guides in crop.c and ashift.c
  3. cleanup imageop guides pollution.

Building options: disable Lua by default

Too many weird bugs all around.

Ashift: make straightening mode and editing mode mutually exclusive

Straightening mode is the feature ported from the clip.c module, where the horizon line is drawn by holding right-click on the image.

Editing is the native mode of this module, where we work on content lines.

Straightening mode is somewhat of a duplicate feature that collides with the native mode, because both modes use the right-click button_pressed event with different meanings. Bypass straightening when full editing mode is on.

develop.c: add_history_item: remove checks for GUI

There is no reason to make adding history items conditional on some develop object being attached to a GUI; in any case, the undo record starts no matter what, but only ends when a GUI is present.

Bauhaus: don't set screen resolution

It's set at the window level already.

Is your feature request related to a problem?

exiv2 doesn't recognize the lens Sigma 24mm f/1.4 DG HSM Art when used with a crop sensor, i.e. Nikon D7500. I was reproducing that issue with darktable (built from current master) and also by directly launching exiv2 from a MINGW64 shell under Windows 11.

In the meantime, the crop-related calibration data was added to the lensfun database (see lensfun issue 2126); the issue of darktable not recognizing this lens, and the content of the exiv2.ini, are described in lensfun issue 2127.

Describe the solution you would like

Would be wonderful, if you may add recognition of this lens to exiv2.

Describe alternatives you have considered

As a work-around I created an exiv2.ini linking the lens ID with the lens name string found in the lensfun database

Desktop

  • Windows 11
  • Exiv2 version 0.27.7 installed via pacman
  • darktable built from current master

Additional context

I attached a JPEG file that was taken with this lens. Don't get confused, it's just a "black shot" 😉. And I wish all of you developing, maintaining or otherwise contributing to this nice piece of software a Happy New Year 😃

Thanks for the sample.

Thanks for including the lens so quickly - have a good time.

Description of the bug

When changing the value of TCA Overwrite, the main preview image is heavily distorted. This is fixed when zooming in or out.

To Reproduce

  1. Open image in Darkroom
  2. Enable Lens Correction
  3. Enable TCA Overwrite in Lens Correction
  4. Change value of TCA Red or TCA Blue
  5. See distorted main preview image.
  6. Zoom in or out of main image to see fix.

Expected behavior

TCA applied without the heavy distortion

Screenshots

Screenshot 2024-01-01 193151

Which commit introduced the error

f4d6756

System

Darktable version: f4d6756 OS : Windows 10 (build 19045) Memory : 24576mb Graphics card : AMD Radeon R9 200 OpenCL installed : Yes OpenCL activated : Yes

Bauhaus: use the max height of a single text line as a minimum for middle alignment of text labels

Set window parent of dialogs (import / export) to enable position setting (like preferences dialog) on macos


lensfun Discussions


Discussions

  • 💬 How to install lensfun in GIMP 2.10.36 under Mint 21.2 · LateJunction, started Jan 1, 2024 in General · 1
  • 🙏 Can you calibrate an adapted lens with a crop sensor? · Agilulfo, asked Sep 15, 2023 in Q&A · Closed · Answered · 3
  • 🙏 Updating lensfun On Linux Mint · MitchWoodin, asked Aug 29, 2023 in Q&A · Unanswered · 3
  • 💡 Tamron 28-75mm Di III VXD G2? · hubbsterx, started Jun 6, 2023 in Ideas · 2
  • 🙏 LENS SONY 4-5.6/28-60 · marc94510, asked Apr 16, 2023 in Q&A · Unanswered · 1
  • 🙏 Lens and several mounts · bob2204, asked Mar 11, 2023 in Q&A · Answered · 2
  • 💬 0.3.3 maintenance release · Sturmflut, started Feb 20, 2022 in General · 4
  • 💬 0.3.4 maintenance release · Sturmflut, started Dec 28, 2022 in General · Closed · 1
  • 🙌 Axis/Ricom 3-10.5mm calibration file · JC3, started Oct 23, 2022 in Show and tell · 2
  • 🙏 Canon RF 24-70mm f/2.8L IS USM · nschaer, asked Oct 31, 2022 in Q&A · Answered · 1
  • 💡 Overview for Database · rebio, started Feb 11, 2022 in Ideas · 8
  • 💬 Do we need focus distance in TCA (and distortion) correction? · uchrisu, started Feb 17, 2022 in General · 2
  • 🙏 How to run lensfun-convert-lcp · aaeneas, asked Aug 25, 2022 in Q&A · Unanswered · 0
  • 💬 Can anyone help with my new project? · martbetz, started Aug 19, 2022 in General · 0
  • 🙏 How to request support for a new lens? · stercomp, asked Apr 10, 2022 in Q&A · Answered · 6
  • 🙏 camera lens EXIF model name · rebio, asked Feb 13, 2022 in Q&A · Answered · 3
  • 💬 Ok, I'm lost... I'd like to use, one day even contribute, but can't get rolling... · elcycliste, started Apr 23, 2022 in General · 5
  • 💡 New structure for database · rebio, started Feb 11, 2022 in Ideas · 12
  • 💡 Make 'cropfactor' a property of calibration, not lens · gnl21, started Apr 3, 2022 in Ideas · 2
  • 💬 Tool that helps you doing the distortion calibration · uchrisu, started Feb 9, 2022 in General · 3
  • 💡 Using manufacturer lens correction data? · uchrisu, started Feb 7, 2022 in Ideas · 15
  • 🙏 Is this project still alive? What to do about open PRs? Especially those that add lens calibrations? · jonastr, asked Dec 9, 2021 in Q&A · Answered · 5
  • 💬 Using Zeiss's diagrams in PDF spec sheets instead of measuring by hand? · qumuq-til, started Feb 12, 2022 in General · 2
  • 🙏 Pushing a branch · GustavHaapalahti, asked Feb 28, 2021 in Q&A · Answered · 2

Add Sigma 24mm F1.4 DG HSM | A for crop 1.53

fixes #2126

Hello,

I wanted to translate exiv2 to Georgian. I found the project page on Crowdin, read the instructions and emailed Leonardo about adding the language. The language was added right away, but I was told to contact the project maintainers to update the .pot file on Crowdin, because it's too old, so here I am.

BR, Temuri

P.S. There is no "download po file", nor upload in crowdin menu. Maybe there is something to enable for it?

compilation is running

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage No data about Duplication

See analysis details on SonarCloud

Thanks a lot !

Description of the bug

Latest git checkout fails to build with a translation error in it.po

[10/898] Building it locale
FAILED: po/it/LC_MESSAGES/ansel.mo /home/jschrod/Downloads/Photo/ansel/build/po/it/LC_MESSAGES/ansel.mo
cd /home/jschrod/Downloads/Photo/ansel/build/po && /usr/bin/msgfmt -v -c /home/jschrod/Downloads/Photo/ansel/po/it.po -o /home/jschrod/Downloads/Photo/ansel/build/po/it/LC_MESSAGES/ansel.mo
/home/jschrod/Downloads/Photo/ansel/po/it.po:4315: 'msgid' and 'msgstr' entries do not both end with '\n'

To Reproduce

  1. Checkout latest git
  2. sh build.sh

Expected behavior

build should succeed

Which commit introduced the error

fe9d009

Fixed with PR #305. To compile: edit po/it.po and add "\n" at the end of line 4317.

po: Update zh_CN translation

Some missed corrections I found today.

Ashift: ensure crop is recomputed on horizon line drawing/rotation adjustment

Hope it's good

Thanks !

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage No data about Duplication

See analysis details on SonarCloud

We are sending this email because we've detected that you have a domain with an upcoming expiration date.

The following domain is expiring within 60 days:

exiv2.org -- Expires in 60 days If expired domains are not renewed, they will be deleted and made available for registration.

Click here to log in to your account

Note: Your registration information indicates that the Registrant Contact for a domain name mentioned in this message resides in a country with an applicable tax rate. When your domain name is renewed, you will be charged tax if you do not have an exemption. Please review our tax exemption policy here.

If you have any questions, please do not hesitate to contact us at support@pairdomains.com.

Thank you, pairdomains.com Customer Care support@pairdomains.com

You can disable expiration reminders for individual domain names by logging into your Pair Domains account, clicking on one of your domain names, and then clicking "Toggle Expiration Reminders".

  • pair International is the trade name for Ryousha Kokusai, LLC. Services to our international customers are provided exclusively by pair Networks, Inc., a World Class web host and domain registrar. All domain registration services will be fulfilled by Pair Domains, a division of pair Networks, Inc. pursuant to its ICANN status and agreements with other TLD Registries.

Add Canon RF 24mm F1.8 MACRO IS STM

upload ae7087

Perspective correction : massive GUI overhaul

  1. rename horizon and perspective (goal instead of half-goal/half-mean),
  2. remove useless pipeline flushing,
  3. implement GUI pipe cache bypass for settings,
  4. remove the monkey business of committing fake params to history and swapping them for the sake of updating the preview; work on a private GUI copy of params,
  5. implement the same Edit/Validate/Cancel logic as in the crop module to allow going back,
  6. remove all monkey business with on-focus/out-of-focus GUI events; rely on explicit editing mode,
  7. simplify the control flow for GUI events,
  8. expose fitting options in the GUI instead of having hidden key modifiers on button click events,
  9. re-arrange GUI controls.

Bauhaus: increase line height

Slightly rework sizings based on line height

This adds support for extracting and storing some of the metadata that can be obtained from start of frame (SOF) tags in JPEG files, namely:

  • The encoding process
  • The number of color components

AFAIK only exiftool supports such "tags" (see the SOF tags in https://exiftool.org/TagNames/JPEG.html#SOF). As they don't really belong to a metadata class like Exif or IPTC and are specific to a single format (JPEG), I added them as members of Exiv2::JpegImage. I added a small sample and test to validate the implementation.

Motivation: Kodi currently implements its own jpeg parser and can thus obtain these two properties (https://github.com/xbmc/xbmc/blob/4532c7441510a9c9ebfa2023dd129ce13544890b/xbmc/pictures/PictureInfoTag.cpp#L416-L441 https://github.com/xbmc/xbmc/blob/master/xbmc/pictures/JpegParse.cpp#L73-L87). With my aim to rewrite the implementation based on exiv2 (https://github.com/xbmc/xbmc/pull/24109) they'll be removed if no alternative exists. So I decided to have a go at it and leave it to your consideration to collect feedback (and check if it's possible to have it upstream).
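For reference, the two properties can be pulled out of a JPEG stream without a full decoder: walk the marker segments until a start-of-frame marker (0xC0–0xCF, excluding DHT/JPG/DAC), whose payload stores precision, dimensions and the component count at fixed offsets. A rough Python sketch of the idea (not exiv2's actual implementation):

```python
import struct

# SOF markers; 0xC4 (DHT), 0xC8 (JPG), 0xCC (DAC) are not frame headers.
SOF_MARKERS = {m for m in range(0xC0, 0xD0)} - {0xC4, 0xC8, 0xCC}

def read_sof_info(data: bytes):
    """Return (encoding_process_marker, num_components) from JPEG bytes."""
    assert data[:2] == b"\xFF\xD8", "not a JPEG (missing SOI)"
    i = 2
    while i + 4 <= len(data):
        assert data[i] == 0xFF, "marker expected"
        marker = data[i + 1]
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker in SOF_MARKERS:
            # segment layout: length(2) precision(1) height(2) width(2) ncomp(1)
            ncomp = data[i + 2 + 7]
            return marker, ncomp
        i += 2 + length  # skip marker bytes plus segment payload
    raise ValueError("no SOF segment found")

# Minimal hand-built stream: SOI + baseline SOF0, 3 components, no scan data.
jpeg = (b"\xFF\xD8"
        b"\xFF\xC0\x00\x11\x08\x00\x10\x00\x10\x03"
        b"\x01\x22\x00\x02\x11\x01\x03\x11\x01")
print(read_sof_info(jpeg))  # SOF0 (baseline DCT) with 3 components
```

The marker value itself identifies the encoding process (0xC0 baseline, 0xC2 progressive, etc.), which is exactly what exiftool reports for its SOF "tags".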


Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (349c1f2) 63.89% compared to head (b9c167e) 63.90%.

Files Patch % Lines
src/jpgimage.cpp 75.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2874 +/- ##
=======================================
 Coverage 63.89% 63.90% 
=======================================
 Files 103 104 +1 
 Lines 22381 22389 +8 
 Branches 10872 10876 +4 
=======================================
+ Hits 14301 14308 +7 
 Misses 5857 5857 
- Partials 2223 2224 +1 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@neheb thanks for merging. May I ask if you guys have any rough time estimate for a future release/tag of the library? Just to have an idea of when it's best to have it merged into upstream Kodi.

@mergifyio backport 0.28.x

Best to do that if you want this in a release. Current master will not be released for a long time.

backport 0.28.x

✅ Backports have been created

Thanks, I'll handle the backport tomorrow. I think this is the only required one. For all other contributions (mostly cmake options) we can carry the patches in the codebase since they only affect platforms for which we build statically anyway (and we already fulfilled our "submit upstream first" patch carrying principle). Cheers

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage No data about Duplication

See analysis details on SonarCloud

Thanks !

My Python wrapper for libexiv2 is released as "manylinux" binary wheels. These include copies of all libraries not guaranteed to be on any Linux system, so users can install a Python package without root access etc.

With EXIV2_ENABLE_WEBREADY disabled the bundled libraries are:

-rwxr-xr-x 1 jim users 135913 Dec 29 11:53 libbrotlicommon-6ce2a53c.so.1.0.6
-rwxr-xr-x 1 jim users 62193 Dec 29 11:53 libbrotlidec-811d1be3.so.1.0.6
-rwxr-xr-x 1 jim users 13161 Dec 29 11:53 libinih-c1f723f1.so.0
-rwxr-xr-x 1 jim users 33057 Dec 29 11:53 libINIReader-f49a69f4.so.0

When I enable EXIV2_ENABLE_WEBREADY they become:

-rwxr-xr-x 1 jim users 135913 Dec 29 12:11 libbrotlicommon-6ce2a53c.so.1.0.6
-rwxr-xr-x 1 jim users 62193 Dec 29 12:11 libbrotlidec-811d1be3.so.1.0.6
-rwxr-xr-x 1 jim users 21545 Dec 29 12:11 libcom_err-bb8268a4.so.2.1
-rwxr-xr-x 1 jim users 140977 Dec 29 12:11 libcrypt-52aca757.so.1.1.0
-rwxr-xr-x 1 jim users 3211825 Dec 29 12:11 libcrypto-0a45f796.so.1.1.1k
-rwxr-xr-x 1 jim users 619945 Dec 29 12:11 libcurl-0f8b79dc.so.4.5.0
-rwxr-xr-x 1 jim users 378601 Dec 29 12:11 libgssapi_krb5-99a927e0.so.2.2
-rwxr-xr-x 1 jim users 140377 Dec 29 12:11 libidn2-2f4a5893.so.0.3.6
-rwxr-xr-x 1 jim users 13161 Dec 29 12:11 libinih-c1f723f1.so.0
-rwxr-xr-x 1 jim users 33057 Dec 29 12:11 libINIReader-f49a69f4.so.0
-rwxr-xr-x 1 jim users 110065 Dec 29 12:11 libk5crypto-4a8d9571.so.3.1
-rwxr-xr-x 1 jim users 17929 Dec 29 12:11 libkeyutils-2777d33d.so.1.6
-rwxr-xr-x 1 jim users 1027097 Dec 29 12:11 libkrb5-e44f72d9.so.3.3
-rwxr-xr-x 1 jim users 85161 Dec 29 12:11 libkrb5support-f915a5d6.so.0.1
-rwxr-xr-x 1 jim users 73361 Dec 29 12:11 liblber-2-a32c7900.4.so.2.10.9
-rwxr-xr-x 1 jim users 360809 Dec 29 12:11 libldap-2-89849551.4.so.2.10.9
-rwxr-xr-x 1 jim users 174913 Dec 29 12:11 libnghttp2-fa6766b2.so.14.17.0
-rwxr-xr-x 1 jim users 547745 Dec 29 12:11 libpcre2-8-516f4c9d.so.0.7.1
-rwxr-xr-x 1 jim users 80217 Dec 29 12:11 libpsl-99becdd3.so.5.3.1
-rwxr-xr-x 1 jim users 138761 Dec 29 12:11 libsasl2-7de4d792.so.3.0.0
-rwxr-xr-x 1 jim users 195097 Dec 29 12:11 libselinux-64a010fa.so.1
-rwxr-xr-x 1 jim users 500177 Dec 29 12:11 libssh-8f1ecd37.so.4.8.7
-rwxr-xr-x 1 jim users 662769 Dec 29 12:11 libssl-a3869b75.so.1.1.1k
-rwxr-xr-x 1 jim users 1826161 Dec 29 12:11 libunistring-05abdd40.so.2.1.0

I suspect a good few of these could be omitted if there were no support for ssh: URLs. I'd like to keep https:, as so many web sites refuse to serve plain old http:, but I see little need to open ssh: URLs from a Python application. Python has its own ways of doing ssh anyway.

I realise you want to minimise the number of build options, and I'm only going to save a few megabytes at best, so feel free to reject this request. (-:

I'm almost certain there is no code directly related to SSH in the code-base of exiv2. These seem to be dependencies of libcurl.

Just on some random system:

$ ldd /usr/lib/libcurl.so.4

linux-vdso.so.1 (0x00007ffeec5e2000)

libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x000078124898a000)

libidn2.so.0 => /usr/lib/libidn2.so.0 (0x0000781248968000)

libssh2.so.1 => /usr/lib/libssh2.so.1 (0x000078124891f000)

libpsl.so.5 => /usr/lib/libpsl.so.5 (0x000078124890b000)

libssl.so.3 => /usr/lib/libssl.so.3 (0x000078124882b000)

libcrypto.so.3 => /usr/lib/libcrypto.so.3 (0x0000781248200000)

libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007812487d5000)

libzstd.so.1 => /usr/lib/libzstd.so.1 (0x000078124812d000)

libbrotlidec.so.1 => /usr/lib/libbrotlidec.so.1 (0x00007812487c6000)

libz.so.1 => /usr/lib/libz.so.1 (0x00007812487ac000)

libc.so.6 => /usr/lib/libc.so.6 (0x0000781247f4f000)

libunistring.so.5 => /usr/lib/libunistring.so.5 (0x0000781247d95000)

libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x0000781247cbd000)

libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x000078124877c000)

libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x0000781248776000)

libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x0000781248768000)

libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x0000781248761000)

libresolv.so.2 => /usr/lib/libresolv.so.2 (0x0000781248750000)

libbrotlicommon.so.1 => /usr/lib/libbrotlicommon.so.1 (0x0000781247c9a000)

/usr/lib64/ld-linux-x86-64.so.2 (0x0000781248ab1000)

So I'm led to believe that you could bundle it with a custom-built libcurl, built WITHOUT --with-ssh or something like that. Please check the "Reducing size" section of https://curl.se/docs/install.html.

Note that you can disable curl (i.e. HTTPS and FTP) support w/ -DEXIV2_ENABLE_WEBREADY=ON -DEXIV2_ENABLE_CURL=OFF, in which case you'll only have the internal (unsafe?) HTTP implementation, or you can turn that off completely as well w/ -DEXIV2_ENABLE_WEBREADY=OFF.
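To make the suggestions above concrete, a bundled libcurl could be configured without the protocols a Python wheel never needs. The flags below are illustrative of curl's ./configure options and should be verified against ./configure --help for your curl version; the exiv2 flags are the ones quoted above:

```sh
# Hypothetical slimmed-down libcurl build for bundling; verify each flag
# against your curl version before relying on it.
./configure --with-openssl \
            --disable-ldap --disable-ldaps --disable-ftp \
            --without-libssh2 --without-nghttp2 --without-libpsl

# exiv2 side: keep webready via curl, or drop curl (internal HTTP only),
# or disable web access altogether.
cmake -DEXIV2_ENABLE_WEBREADY=ON  -DEXIV2_ENABLE_CURL=ON  ..
cmake -DEXIV2_ENABLE_WEBREADY=ON  -DEXIV2_ENABLE_CURL=OFF ..
cmake -DEXIV2_ENABLE_WEBREADY=OFF ..
```

Each dropped protocol removes its transitive shared-library dependencies (libssh2, krb5, ldap, sasl, etc.) from the wheel.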

@jim-easterbrook Please consider closing if you're happy with the answers. I don't believe there is anything further to do here from exiv2 point of view...

Yup, it looks as if I can't reduce the size much without losing https:// access, which is pretty much essential these days. Thanks for looking into it.

Image OP: remove the pipe invalidation on GUI focus/unfocus

Focus smart-assery is replaced by explicit "Edit" buttons in crop and perspective modules

Develop: de-implement dt_dev_from_gui

not used anywhere now

White balance/temperature.c: color picker: commit to history

Not sure what this thing did before but it stopped doing anything at some point.

Color picker: remove calls to _invalidate_from_gui

This internally calls pop_history_item, which makes no sense since the goal is only to set the dirty flag on pipe.

Color picker : replace the preview pipe hash hack by the cache_bypass API

Remove calls to dt_dev_invalidate_from_gui() which actually does a pop_history for no reason. Replace by dt_dev_invalidate_preview which only sets the right flags, then start a recompute.

Develop API hash API: directly get pipeline global_hash

Remove the distort hash functions, since the global_hash takes distortion and parameters into account

Some missed corrections I found today.

I am sorry this has to be in another PR; some translations only show when editing non-RAW images, so I didn't find them previously.

Thanks !

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage 0.2% Duplication on New Code

See analysis details on SonarCloud

import: Allow BMP files

Some scanners produce BMP images and we should allow users to import BMP.


Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage 0.0% Duplication on New Code

See analysis details on SonarCloud

Tweak some translatable texts

Some word sequences are changed. This makes it easier to understand and write when translating to other languages.

po: Update zh_CN translation

I finally took a whole look at the zh_CN translation and corrected a lot of wrong/misleading items in it. There may still be mistakes; I'll try to find them in the future.

po: Update POTFILES.in and ansel.pot

Because we dropped run-from-build-dir in 3f1af97f1, the structure of the build dir changed and generated files are under different paths, so this commit updates POTFILES.in to scan for translatable texts in the new paths.

  • For macOS 13.3.1 / Clang 14.2
  • Add Homebrew-based installation and build scripts (made for exiv2 0.27.6!)
  • Some fixes related to macOS / GTK on macOS
  • Suppress some SSE warnings

don’t forget to complete the credits in BUILD_hb.txt ;)


Will do. Thanks.

Quality Gate passed

The SonarCloud Quality Gate passed, but some issues were introduced.

2 New issues 0 Security Hotspots No data about Coverage 1.3% Duplication on New Code

See analysis details on SonarCloud

Manually rebased and pushed. Thanks !

Update anselconfig default patterns

Corrected default file patterns: $(FILEFOLDER) was used instead of $(FILE.FOLDER)

exiv2 can be built either as a shared library or as a static one, never both at the same time. Hence, requiring the dependencies needed for static linking when exiv2 was built shared does not make much sense, and it only adds unneeded bits for exiv2 users.

Hence, fix both the pkg-config file, and the cmake config file, to require the dependencies needed for static linking only in case exiv2 was built as static library. This avoids e.g. having the expat and zlib development bits when using exiv2 in cmake with find_package(exiv2).

See the specific commit messages for longer descriptions.
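In pkg-config terms, the split described here maps onto the Requires.private/Libs.private fields, which pkg-config only folds into the link line when invoked with --static; for a shared-only build those fields can be omitted entirely at generation time. A hypothetical sketch of an exiv2.pc for a static build (field values illustrative, not the actual shipped file):

```
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: exiv2
Description: Image metadata library
Version: 0.28.1
Cflags: -I${includedir}
Libs: -L${libdir} -lexiv2
# Only pulled in by `pkg-config --static --libs exiv2`,
# so only static consumers need zlib/expat development bits:
Requires.private: zlib expat
Libs.private: -lz -lexpat
```

The cmake config file gets the analogous treatment: find_package(exiv2) only searches for the private dependencies when the installed library is static.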


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (b4f145e) 63.89% compared to head (a1ff128) 63.89%.

@@ Coverage Diff @@
## main #2872 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22381 22381 
 Branches 10872 10872 
=======================================
 Hits 14301 14301 
 Misses 5857 5857 
 Partials 2223 2223 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Some distros ship both static and shared libs, how would you handle that? We still need a single .pc file that handles both. If you don't pass --static to pkg-config there is no harm done, no?

Some distros ship both static and shared libs,

Do you have examples of this? I checked Arch, Debian (which I maintain, and thus Ubuntu), Fedora, FreeBSD, Gentoo, Mageia, OpenBSD, openSUSE, and Void; other distros are derivatives of the ones mentioned (thus sharing the build configuration). All of the ones I inspected build using the default build type, which is shared, and no static bits are shipped (with the exception of libexiv2-xmp.a, gone in 0.28).

From what I can see (I also mentioned that in my commit messages), you can build only one variant at a time with cmake, and most likely the same with meson. Hence, if the situation you mention must be supported somehow, then I need to see exactly how and what for.

From what I can see (I also mentioned that in my commit messages), you can build only one variant at a time with cmake, and most likely the same with meson.

No, meson has always supported building both at the same time, and it does this quite well.

This is vital for cross-platform support since unix platforms frequently need this, and it works quite well there. It doesn't work quite so well on Windows (you either need to use .def files or compile every object twice, and then import libraries and static libraries both frequently use .lib) which is usually "this is fine" because on Windows you have to build private copies of your entire dep tree without benefit of a package manager.

And since CMake was primarily designed to work well on Windows, it assumes the Windows model and doesn't bother to support meson -Ddefault_library=both / autotools "--enable-static --enable-shared".

Distros such as Alpine and Debian often package both, when the build system supports both, but won't necessarily go out of their way to run cmake twice in order to build and install both.

won't necessarily go out of their way to run cmake twice in order to build and install both

OTOH, we frequently do run CMake twice on MSYS2 and install both artifacts.

Very true, MSYS2 has a somewhat stronger motivation to provide both.

WHAT DOES THIS DO?

This PR corrects the default file patterns:

  • $(FILEFOLDER) was used instead of $(FILE.FOLDER).
  • Other variables use a valid form but don't correspond to the auto-completion's patterns, which are supposed to show good practice (. is used to separate words in auto-completion, instead of a _ in the default patterns).

Quality Gate failed

Failed conditions

4.7% Duplication on New Code (required ≤ 3%)

See analysis details on SonarCloud

In #2204 we discussed improving the easyaccess documentation. Some text changes are covered by #2706. Additionally, I suggested adding some text to the Wiki. As I am not able to make changes to the Wiki, I asked who can do it, but this question in #2204 was not answered; it seems it did not reach the right audience. So I put this question in a new issue here.

Here is an example from slr-canon.xml, with an APS-C raw image that I tried with darktable. The lens correction for vignetting does not work. But if you move the vignetting data from the APS-C profile to the APS-H profile, then the lens correction takes place.

Screenshot_20231226_211744 20231225__145445.zip

I originally added the data for the APS-C version of this lens and this problem drove me crazy. I tried to fix it in various places but what I think is the correct solution is quite a lot of work and no one else was really affected by the problem.

Lensfun picks the APS-H data on purpose, apparently because it gives a slightly more accurate correction [1]. Unfortunately, it also means that for people like me, who have an APS-C camera but not an APS-H one [2], there's no way to add this data for our lens. One obvious solution to the problem is to stop doing that. I don't know how much this would reduce the accuracy but it would potentially change everyone's old edits (in Darktable, at least) and the lensfun maintainer at the time wasn't keen.

I wrote up what I think is the correct solution in a discussion issue here: https://github.com/lensfun/lensfun/discussions/1765 but it's a bit of work and I don't know if lensfun has enough of a community to get agreement on something like that before starting, so I haven't started.

At this point I'm not sure that changes in the lensfun code will ever be released, so unless there's a pure database fix it may not help anyone anyway. I'd be happy to help finally get this fixed, but there'd need to be a plan to actually get any fix out to people.

Anyway, I hacked it locally, deleted the APS-H data and then lensfun will use my APS-C data correctly.

[1] Apparently the calibration is not so accurate near the edges, so you can get a better result by calibrating on a slightly larger sensor where that region is hidden outside the image.

[2] And since APS-H is dead as a format, there's not much hope of this changing.

Add vignetting data for Canon EF 50mm f/1.8

upload ecc85a

This is an automatic backport of pull request #2832 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Attention: 7 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (4bd1984) 63.98%. Report is 8 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2870 +/- ##
==========================================
- Coverage 63.99% 63.98% -0.02% 
==========================================
 Files 103 103 
 Lines 22338 22340 +2 
 Branches 10821 10822 +1 
==========================================
- Hits 14296 14294 -2 
- Misses 5818 5822 +4 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

add tca data for Canon EF 50mm f/1.4 USM

upload b357c3

Add vignetting data for Sigma 15mm f/2.8 EX DG Diagonal Fisheye

upload b857c3

Update profile for Canon EF 28mm f/2.8

upload b857c3

Merge pull request #2134 from payano/compile_warnings

Remove some compile warnings, remove deprecated functions

I hope this modification is welcome. I think having non-ASCII chars in the source code can lead to unexpected behaviour in the translation process (actually some sentences with the three-dots character (…) were not recognized in the Italian one, like Export…). Furthermore, even for source code editors, having mixed ASCII and non-ASCII spaces can be annoying for Search&Replace tasks. PO files can stay UTF-8 to make translators' lives easy with their non-ASCII chars.

This PR includes:

  • [x] Remove/escape non-ASCII chars from source code
  • [x] Update po/ansel.pot using cd po/ && intltool-update -p -g ansel
  • [x] Remove old translations using

for f in *.po ; do
 msgattrib --translated --no-fuzzy --no-obsolete -o "$f" "$f"
done
  • [x] Improve Italian translation po/it.po
  • [x] Update and check if other translations have been broken, using cd po/ && intltool-update -g ansel -r

Broken translations

I ran cd po/ && intltool-update -g ansel -r on the master branch (807bd555bce901c2984386159f00f251a5b1760a) and on my PR.

Differences:

-af: 3115 translated messages, 772 fuzzy translations, 413 untranslated messages.
+af: 3099 translated messages, 789 fuzzy translations, 411 untranslated messages.

-ca: 1988 translated messages, 1313 fuzzy translations, 999 untranslated messages.
+ca: 1988 translated messages, 1314 fuzzy translations, 997 untranslated messages.

-cs: 3869 translated messages, 288 fuzzy translations, 143 untranslated messages.
+cs: 3847 translated messages, 310 fuzzy translations, 142 untranslated messages.

-da: 1984 translated messages, 1307 fuzzy translations, 1009 untranslated messages.
+da: 1984 translated messages, 1308 fuzzy translations, 1007 untranslated messages.

-de: 3834 translated messages, 335 fuzzy translations, 131 untranslated messages.
+de: 3812 translated messages, 358 fuzzy translations, 129 untranslated messages.

-el: 1254 translated messages, 1516 fuzzy translations, 1530 untranslated messages.
+el: 1254 translated messages, 1517 fuzzy translations, 1528 untranslated messages.

-eo: 3871 translated messages, 290 fuzzy translations, 139 untranslated messages.
+eo: 3849 translated messages, 312 fuzzy translations, 138 untranslated messages.

-es: 4109 translated messages, 134 fuzzy translations, 57 untranslated messages.
+es: 4081 translated messages, 162 fuzzy translations, 56 untranslated messages.

-fi: 3833 translated messages, 330 fuzzy translations, 137 untranslated messages.
+fi: 3811 translated messages, 353 fuzzy translations, 135 untranslated messages.

-fr: 4242 translated messages, 40 fuzzy translations, 18 untranslated messages.
+fr: 4203 translated messages, 78 fuzzy translations, 18 untranslated messages.

-gl: 231 translated messages, 1662 fuzzy translations, 2407 untranslated messages.
+gl: 231 translated messages, 1662 fuzzy translations, 2406 untranslated messages.

-he: 3831 translated messages, 332 fuzzy translations, 137 untranslated messages.
+he: 3809 translated messages, 355 fuzzy translations, 135 untranslated messages.

-hu: 3910 translated messages, 253 fuzzy translations, 137 untranslated messages.
+hu: 3888 translated messages, 276 fuzzy translations, 135 untranslated messages.

-it: 4242 translated messages, 39 fuzzy translations, 19 untranslated messages.
+it: 4262 translated messages, 24 fuzzy translations, 13 untranslated messages.

-ja: 3833 translated messages, 334 fuzzy translations, 133 untranslated messages.
+ja: 3811 translated messages, 356 fuzzy translations, 132 untranslated messages.

-nb: 1982 translated messages, 1299 fuzzy translations, 1019 untranslated messages.
+nb: 1982 translated messages, 1300 fuzzy translations, 1017 untranslated messages.

-nl: 3834 translated messages, 335 fuzzy translations, 131 untranslated messages.
+nl: 3812 translated messages, 358 fuzzy translations, 129 untranslated messages.

-pl: 3833 translated messages, 336 fuzzy translations, 131 untranslated messages.
+pl: 3811 translated messages, 359 fuzzy translations, 129 untranslated messages.

-pt_BR: 3833 translated messages, 326 fuzzy translations, 141 untranslated messages.
+pt_BR: 3811 translated messages, 348 fuzzy translations, 140 untranslated messages.

-pt_PT: 994 translated messages, 1598 fuzzy translations, 1708 untranslated messages.
+pt_PT: 994 translated messages, 1598 fuzzy translations, 1707 untranslated messages.

-ro: 440 translated messages, 1666 fuzzy translations, 2194 untranslated messages.
+ro: 440 translated messages, 1666 fuzzy translations, 2193 untranslated messages.

-ru: 3833 translated messages, 336 fuzzy translations, 131 untranslated messages.
+ru: 3811 translated messages, 359 fuzzy translations, 129 untranslated messages.

-sk: 2721 translated messages, 997 fuzzy translations, 582 untranslated messages.
+sk: 2706 translated messages, 1013 fuzzy translations, 580 untranslated messages.

-sl: 3833 translated messages, 330 fuzzy translations, 137 untranslated messages.
+sl: 3811 translated messages, 353 fuzzy translations, 135 untranslated messages.

-sq: 3733 translated messages, 400 fuzzy translations, 167 untranslated messages.
+sq: 3711 translated messages, 422 fuzzy translations, 166 untranslated messages.

-sr: 2837 translated messages, 962 fuzzy translations, 501 untranslated messages.
+sr: 2822 translated messages, 978 fuzzy translations, 499 untranslated messages.

-sr@latin: 2837 translated messages, 962 fuzzy translations, 501 untranslated messages.
+sr@latin: 2822 translated messages, 978 fuzzy translations, 499 untranslated messages.

-sv: 2246 translated messages, 1317 fuzzy translations, 737 untranslated messages.
+sv: 2233 translated messages, 1332 fuzzy translations, 734 untranslated messages.

-th: 268 translated messages, 1055 fuzzy translations, 2977 untranslated messages.
+th: 268 translated messages, 1057 fuzzy translations, 2974 untranslated messages.

-tr: 3833 translated messages, 326 fuzzy translations, 141 untranslated messages.
+tr: 3811 translated messages, 348 fuzzy translations, 140 untranslated messages.

-uk: 3833 translated messages, 330 fuzzy translations, 137 untranslated messages.
+uk: 3811 translated messages, 353 fuzzy translations, 135 untranslated messages.

-zh_CN: 4298 translated messages, 2 untranslated messages.
+zh_CN: 4265 translated messages, 32 fuzzy translations, 2 untranslated messages.

-zh_TW: 3826 translated messages, 332 fuzzy translations, 142 untranslated messages.
+zh_TW: 3804 translated messages, 354 fuzzy translations, 141 untranslated messages.

Nice, thanks

Hi @aurelienpierre, I updated the PR and put the before/after differences using intltool-update -g ansel -r. If that is the correct way to test the validity of the translations, my PR downgraded some entries to fuzzy in all translations. I fixed the Italian ones, but it will take a little effort from other translators to fix the rest.

Is that acceptable to merge?

In the meantime, I continue to improve some entries in the Italian one.

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage 1.4% Duplication on New Code

See analysis details on SonarCloud

Conflicting PR coming at around the same time. I will manually fix conflicts in the next days if you don't.

I will do it tomorrow 👍🏻

Manually rebased with 62a38008b42d42996dfcfb8d79541d50ea863786 -> 2aedaf869ac9132a4c54eb6b07a13481e01bdc50. Please have a look.

Awesome! Thank you! Anyway, I think the update script added with cc40d8b93ec7e55453c325f77abff3ee5eed628b should not use --no-fuzzy, because when sources change, the intltool update could tag some valid entries as fuzzy just to warn the user that something could be broken. Obsolete entries can be discarded, but fuzzy ones should always be checked by translators. What do you think?

Yes, that makes sense, I'll change that.

Merge pull request #593 from LebedevRI/blacklevelseparate

Refactor blackLevelSeparate towards being an Array2DRef

RawImage: wrap blackLevelSeparate into a Array1DRef

We are going to need to be able to make the size dynamic later on anyway, and the current flat array is not expressive enough.

Construct blackLevelSeparateStorage only when needed; it does not exist by default

std::array is neither expressive enough nor wide enough. It kind of assumes that levels are specified for each color of a Bayer (2x2) CFA, which means it can't really represent an X-Trans (6x6) CFA, nor does it really work for non-CFA images (https://github.com/darktable-org/rawspeed/issues/215).

This only refactors it towards that goal, but does not actually add support for other cases.

If you used to have mRaw->blackLevelSeparate[c] in your code, a no-op refactoring is:

 assert(mRaw->blackLevelSeparate.width == 2 && mRaw->blackLevelSeparate.height == 2);
 auto blackLevelSeparate1D = *mRaw->blackLevelSeparate.getAsArray1DRef();
<...>
 blackLevelSeparate1D(c)
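To illustrate why the 2D shape helps, here is a quick Python sketch of the indexing scheme (rawspeed itself is C++; this stand-in class is hypothetical and only mimics the idea, not the actual Array2DRef API):

```python
# Stand-in for the 2D black-level idea (not rawspeed's Array2DRef):
# levels are indexed by CFA position, so a 2x2 Bayer pattern and a
# 6x6 X-Trans pattern share one interface instead of a flat 4-entry array.
class BlackLevels2D:
    def __init__(self, width, height, fill=0):
        self.width, self.height = width, height
        self._data = [fill] * (width * height)

    def __call__(self, row, col):
        return self._data[row * self.width + col]

    def set(self, row, col, value):
        self._data[row * self.width + col] = value

bayer = BlackLevels2D(2, 2)  # 2x2 CFA: R, G1 / G2, B (illustrative levels)
for (r, c), v in {(0, 0): 512, (0, 1): 514, (1, 0): 514, (1, 1): 516}.items():
    bayer.set(r, c, v)

# Per-pixel lookup wraps image coordinates into the CFA pattern.
row, col = 5, 7
print(bayer(row % bayer.height, col % bayer.width))  # 516
```

For X-Trans, the same lookup works with BlackLevels2D(6, 6); the flat std::array could not express that at all.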

Codecov Report

Attention: 75 lines in your changes are missing coverage. Please review.

Comparison is base (f59f0ba) 59.01% compared to head (bb2a6cb) 58.90%.

Files Patch % Lines
src/librawspeed/common/RawImageDataU16.cpp 0.00% 25 Missing :warning:
src/librawspeed/common/RawImageDataFloat.cpp 0.00% 15 Missing :warning:
src/librawspeed/decoders/RawDecoder.cpp 0.00% 14 Missing :warning:
src/librawspeed/decoders/DngDecoder.cpp 54.54% 10 Missing :warning:
src/utilities/identify/rawspeed-identify.cpp 0.00% 7 Missing :warning:
src/librawspeed/decoders/RafDecoder.cpp 83.33% 0 Missing and 2 partials :warning:
src/librawspeed/adt/Array2DRef.h 0.00% 1 Missing :warning:
src/librawspeed/decoders/Cr2Decoder.cpp 83.33% 1 Missing :warning:
@@ Coverage Diff @@
## develop #593 +/- ##
===========================================
- Coverage 59.01% 58.90% -0.11% 
===========================================
 Files 247 247 
 Lines 14522 14609 +87 
 Branches 1987 1991 +4 
===========================================
+ Hits 8570 8606 +36 
- Misses 5834 5882 +48 
- Partials 118 121 +3 
Flag Coverage Δ
benchmarks 9.72% <3.33%> (-0.05%) :arrow_down:
integration 46.88% <46.66%> (+0.02%) :arrow_up:
linux 56.68% <46.66%> (-0.05%) :arrow_down:
macOS 20.26% <3.33%> (-0.09%) :arrow_down:
rpu_u 46.88% <46.66%> (+0.02%) :arrow_up:
unittests 18.06% <0.00%> (-0.11%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Bump DoozyX/clang-format-lint-action from 0.16 to 0.17

Bumps DoozyX/clang-format-lint-action from 0.16 to 0.17.


updated-dependencies:
- dependency-name: DoozyX/clang-format-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot]

Bumps DoozyX/clang-format-lint-action from 0.16 to 0.17.

Sourced from DoozyX/clang-format-lint-action's releases.

v0.17

What's Changed

Changed clang-format version to be the latest (17) #57

Add clang-format 17.0.2 by @alemuntoni in DoozyX/clang-format-lint-action#60

Allow directories with white-spaces to be passed by @rjwignar in DoozyX/clang-format-lint-action#63

New Contributors

@alemuntoni made their first contribution in DoozyX/clang-format-lint-action#60

@rjwignar made their first contribution in DoozyX/clang-format-lint-action#63

Full Changelog: https://github.com/DoozyX/clang-format-lint-action/compare/v0.16.2...v0.17

v0.16.2

Add clang-format 16.0.3 (@mirenradia)

v0.16.1

Fix clang-format-16 not executable

11b773b Update action.yml

1ecdb34 Update README.md

ad467e3 Merge pull request #63 from rjwignar/whitespacesTest

f6987b4 Merge pull request #60 from alemuntoni/master

5b3cd8c removed commented-out draft code from split_list_arg() and list_files()

d4653f9 added comment for regex pattern in split_list_arg()

fee86da refactored split_list_arg()

4cfac55 moved path normalization logic from split_list_arg() to normalize_paths()

bacf977 moved path normalization logic from list_files() to split_list_arg()

797fe4d extended split_list_arg to split by whitespaces while ignoring filepaths with...

Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (bc18c2a) 63.88% compared to head (81f7b57) 63.89%. Report is 7 commits behind head on main.

Files Patch % Lines
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2869 +/- ##
==========================================
+ Coverage 63.88% 63.89% +0.01% 
==========================================
 Files 103 103 
 Lines 22369 22381 +12 
 Branches 10865 10872 +7 
==========================================
+ Hits 14291 14301 +10 
- Misses 5856 5857 +1 
- Partials 2222 2223 +1 


Merge remote-tracking branch 'upstream/pr/592' into develop

  • upstream/pr/592:
      • Un-disable clang-tidy for MD5 stuff
      • MD5Test: fix clang-tidy complaints
      • MD5Benchmark: fix clang-tidy complaints
      • clang-tidy: don't disable clang-tidy in tests, only specific checks
      • .clang-format-ignore: ignore .clang-tidy
      • clang-tidy: temporarily disable rawspeed-clang-tidy-module check
      • CI: rawSpeed clang-tidy module integration
      • CMake: RawSpeed clang-tidy module integration

Description of the bug

When I zoom while in Darkroom, the processing of highlights changes.

To Reproduce

  1. Click on an image to open 'Darkroom'
  2. Apply some modification with Tones
  3. Zoom in and zoom out to see different results
  4. Can happen also to see artifacts while moving in zoomed view: Image

Expected behavior

The rendering should not change colors/lighting when zooming in or out.

Context

Screenshots

Before Before

After After

UPDATE - Which commit introduced the error

After bisecting, the bug seems to be introduced at:

ba4dc399ea51bebb4cfeda2e081e981557cdf175 is the first bad commit
commit ba4dc399ea51bebb4cfeda2e081e981557cdf175
Author: Aurélien PIERRE 
Date: Wed Dec 20 21:56:43 2023 +0100

Pixelpipe cache: ensure the cache is used from the first run

src/develop/develop.c | 4 ++--
 src/develop/develop.h | 5 -----
 src/develop/pixelpipe_hb.c | 17 ++++++-----------
 3 files changed, 8 insertions(+), 18 deletions(-)

Anyway, also at commit 43696fb645bc34c9c20a3cbc942b2af3e6cee727, I sometimes experienced the artifacts produced while moving in the zoomed view. No change of light, and not always.

UPDATE - System

Linux 6.2.0-39-generic #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC

ansel --version
this is ansel 0.0.0+659~gba4dc399e
copyright (c) 2009-2023 johannes hanika
https://github.com/aurelienpierreeng/ansel/issues

compile options:
  bit depth is 64 bit
  normal build
  SSE2 optimized codepath enabled
  OpenMP support enabled
  OpenCL support enabled
  Lua support enabled, API version 8.0.0
  Colord support enabled
  GraphicsMagick support enabled
  ImageMagick support disabled
  OpenEXR support enabled

Additional context

  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes

Hi @aurelienpierre, I did the bisect and updated the description.

Thanks! I have a possible solution for that.

I can also reproduce this.

I've just reproduced it too

Yay, it works!

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (cd620ee) 59.00% compared to head (5b55a69) 59.01%.

@@ Coverage Diff @@
## develop #592 +/- ##
========================================
 Coverage 59.00% 59.01% 
========================================
 Files 247 247 
 Lines 14520 14522 +2 
 Branches 1987 1987 
========================================
+ Hits 8568 8570 +2 
 Misses 5834 5834 
 Partials 118 118 
Flag Coverage Δ
benchmarks 9.76% <50.00%> (-0.01%) :arrow_down:
integration 46.86% <0.00%> (-0.02%) :arrow_down:
linux 56.73% <100.00%> (+<0.01%) :arrow_up:
macOS 20.35% <100.00%> (ø)
rpu_u 46.86% <0.00%> (-0.02%) :arrow_down:
unittests 18.17% <50.00%> (+<0.01%) :arrow_up:
windows ∅ <ø> (∅)


Because we dropped run-from-build-dir in https://github.com/aurelienpierreeng/ansel/commit/3f1af97f1344561ead0a32cb57ac2dc5ac32b6e7, the structure of build dir changed and generated files are under different paths, so this commit updates POTFILES.in to scan translatable texts in new paths.

I finally took a whole look at the zh_CN translation and corrected a lot of wrong/misleading items in it. There may still be mistakes; I'll try to find them in the future.

I think the bot is finding issues in code that I never touched in this PR.

Quality Gate passed

The SonarCloud Quality Gate passed, but some issues were introduced.

  • 1 New issue
  • 0 Security Hotspots
  • No data about Coverage
  • 0.0% Duplication on New Code

See analysis details on SonarCloud

Thanks!

I'm experimenting with building libexiv2 v0.28.1 on Windows 7 (the only Windows machine I have). I notice that after compiling and installing the directory build-msvc/bin has a copy of libcurl.dll as well as exiv2.dll, but the directory build-msvc/install/bin only has exiv2.dll.

Shouldn't libcurl.dll (and any other non-system runtime dependencies) get copied as part of the "install" process? I believe curl is included in recent Windows, so this probably doesn't matter to most users.

Add history item: do not update mask GUI

It's overkill in some cases, though it allows factorizing some blending_gui.c code. The real problem is that it resets color pickers, which conflicts with the modules' color_picker_apply() methods that actually commit results to history.

found in darktable (https://github.com/darktable-org/darktable/issues/15934) situation: i created lens correction data for Canon EF 100-400mm f/4.5-5.6L IS II USM + 1.4x and added them to local lensfun database: slr-canon.xml:

...

<lens>
    <maker>Canon</maker>
    <model>Canon EF 100-400mm f/4.5-5.6L IS II USM + 1.4x</model>
    <model>Canon EF 100-400mm f/4.5-5.6L IS II USM + 1.4x</model>
    <model>Canon EF 100-400mm f/4.5-5.6L IS II USM + 1.4x</model>
    <mount>Canon EF</mount>
    <cropfactor>1</cropfactor>
</lens>

...

exiv2 -pa --grep lens/i identifies it quite ok:

exiv2 -pa --grep lens/i ~/Pictures/Import_20230723/20230722-IMG_0416.CR3
Exif.Photo.LensSpecification Rational 4 140/1 560/1 0/1 0/1
Exif.Photo.LensModel Ascii 39 EF100-400mm f/4.5-5.6L IS II USM +1.4x
Exif.CanonCs.LensType Short 1 Canon EF 100-400mm f/4.5-5.6L IS II USM + 1.4x
Exif.CanonCs.Lens Short 3 140.0 - 560.0 mm
Exif.Canon.LensModel Ascii 138 EF100-400mm f/4.5-5.6L IS II USM +1.4x

when activating lens correction, the lens is not properly detected: 11,1025 Trouble: [Objektivkorrektur] camera/lens not found (20230722-IMG_0416.CR3 56279)

now the strange part:

if i rename the LensType via .exiv2 file:

[canon]
748=an arbitrary value

and do the same with lensfun


<lens>
    <maker>Canon</maker>
    <model>an arbitrary value</model>
</lens>

it's properly detected and the correction is applied. But also ditching the blank between EF and 100-400 via .exiv2 and in slr-canon.xml (→ "Canon EF100-400mm f/4.5-5.6L IS II USM + 1.4x") gives a proper result ;)

any ideas?

Seems like it might have some overlap with #2052 -- the fuzzy matcher is a very strange animal, ripe for improvement.

Merge pull request #591 from LebedevRI/oss-fuzz

Address some fuzzer thingies

LJpegDecompressor: do check that the tile is not narrower than frame cps

We could support those weird-width tiles just like we do other weird-width tiles, but then the predictor handling will need to be redone, because it will try to copy from missing columns.

LJpegDecompressor: drop bogus invariant check between frame cps and cpp

There is no particular LJpeg reason as to why we can't encode (e.g.) all 3 components of demosaiced image with the same predictor. That's stupid, but technically fine.

DngDecoder::parseWhiteBalance(): ASSHOTWHITEXY requires 3*3 color matrix

Perhaps there is a generalization there, but it would be good to have a motivational sample for it first.

ArwDecoder::DecodeLJpeg(): check for tile input overlap

This isn't needed for correctness, but there is no sane reason as to why input buffers for different tiles should ever overlap. This mainly helps fuzzing.
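The overlap check described above boils down to a half-open interval test; a minimal sketch (the helper name is hypothetical, this is not the actual rawspeed code): two byte ranges overlap iff each one starts before the other one ends.

```cpp
#include <cstddef>

// Hedged sketch (hypothetical helper, not rawspeed's actual code):
// two half-open byte ranges [aBegin, aEnd) and [bBegin, bEnd) overlap
// iff each one starts before the other one ends.
bool rangesOverlap(size_t aBegin, size_t aEnd, size_t bBegin, size_t bEnd) {
  return aBegin < bEnd && bBegin < aEnd;
}
```

Note that with half-open ranges, two tiles that merely touch end-to-start do not count as overlapping.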

ArwDecoder::DecodeLJpeg(): avoid integer overflow when recalculating tile width

Multiplication can overflow, naturally, so we need to use a wider type.

We can't really go roundabout here, because we don't know that all the tiles are full. (see code comment)
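The widening fix can be sketched as follows (hypothetical names and parameters, not ArwDecoder's actual code): cast one operand to a 64-bit type before multiplying, so the intermediate product cannot wrap around.

```cpp
#include <cstdint>

// Hedged sketch (hypothetical names, not ArwDecoder's actual code):
// when recalculating a tile width from a product of 32-bit values, the
// multiplication must happen in a wider type, otherwise the
// intermediate product can wrap around.
uint32_t recalcTileWidth(uint32_t tileCount, uint32_t bytesPerTile,
                         uint32_t bytesPerRow) {
  // Widen one operand first: the product is then computed in uint64_t.
  const uint64_t totalBytes = uint64_t(tileCount) * bytesPerTile;
  return uint32_t(totalBytes / bytesPerRow);
}
```

With 32-bit math, a product like 2^20 * 2^20 would silently truncate; with the widened intermediate it divides correctly.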

Codecov Report

Attention: 11 lines in your changes are missing coverage. Please review.

Comparison is base (92d6ec9) 59.00% compared to head (a56a4b3) 59.00%.

Files Patch % Lines
src/librawspeed/decoders/ArwDecoder.cpp 60.00% 4 Missing :warning:
src/librawspeed/decoders/DngDecoder.cpp 20.00% 4 Missing :warning:
...rc/librawspeed/decompressors/LJpegDecompressor.cpp 40.00% 3 Missing :warning:
@@ Coverage Diff @@
## develop #591 +/- ##
========================================
 Coverage 59.00% 59.00% 
========================================
 Files 247 247 
 Lines 14510 14520 +10 
 Branches 1983 1987 +4 
========================================
+ Hits 8561 8568 +7 
- Misses 5831 5834 +3 
 Partials 118 118 
Flag Coverage Δ
benchmarks 9.76% <4.76%> (-0.01%) :arrow_down:
integration 46.87% <52.63%> (+0.01%) :arrow_up:
linux 56.72% <52.63%> (+<0.01%) :arrow_up:
macOS 20.35% <0.00%> (-0.02%) :arrow_down:
rpu_u 46.87% <52.63%> (+0.01%) :arrow_up:
unittests 18.16% <4.76%> (-0.02%) :arrow_down:
windows ∅ <ø> (∅)


Import Session : Removed issues for Windows and some other code adjustments #294

Draft: Jiyone wants to merge 14 commits into aurelienpierreeng:master from Jiyone:win_import

Conversation


Jiyone (collaborator) commented on Dec 22, 2023:

Management of "/" and "\" was broken on Windows. The escaping of "$" as "\$" was removed: a "$" is written literally if it's not followed by a "("; otherwise it's a $(variable) to expand.

"/" are replaced by the correct directory separator on Windows.

The GUI's folder separators between the different fields in Import are correct on Windows

Callback added to jobcode field, so test_path updates after editing it;

The "test path" building code is simplified

It doesn't create folders each time it refreshes anymore.
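The "$" rule described above can be sketched like this (a minimal illustration with hypothetical names, not Ansel's actual implementation): a "$" is copied through verbatim unless it is immediately followed by "(", in which case "$(NAME)" is treated as a variable reference.

```cpp
#include <string>

// Hedged sketch (hypothetical, not Ansel's actual code) of the rule:
// "$" is emitted literally unless followed by "(", in which case
// "$(NAME)" is a variable to expand.
std::string expandDollars(const std::string& in) {
  std::string out;
  for (size_t i = 0; i < in.size(); ++i) {
    if (in[i] == '$' && i + 1 < in.size() && in[i + 1] == '(') {
      // Variable reference: find the closing ')' and substitute. Here we
      // just mark it, since real expansion needs EXIF/session state.
      size_t close = in.find(')', i);
      if (close != std::string::npos) {
        out += "<var:" + in.substr(i + 2, close - i - 2) + ">";
        i = close;
        continue;
      }
    }
    out += in[i]; // a plain "$" (or any other char) is written as-is
  }
  return out;
}
```

So a price like "$5" passes through untouched, while "$(FILE.NAME)" is recognized as a variable.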

Jiyone force-pushed the win_import branch 3 times, most recently on December 23, 2023 17:22.

Jiyone force-pushed the win_import branch on December 23, 2023 18:09.

Jiyone added 2 commits on December 24, 2023 16:23.

Jiyone force-pushed the win_import branch 2 times, most recently on December 26, 2023 17:25.

Jiyone force-pushed the win_import branch on December 26, 2023 17:41.

Jiyone changed the title from "Import Session : Removed issues for Windows" to "Import Session : Removed issues for Windows and some other code adjustments" on Dec 26, 2023.

Jiyone marked this pull request as draft on December 28, 2023 00:27.

Jiyone added 3 commits on December 28, 2023 02:37.

Jiyone force-pushed the win_import branch on December 28, 2023 02:25.

Jiyone and others added 2 commits on December 28, 2023 03:28.

Jiyone force-pushed the win_import branch on December 29, 2023 11:20.

This was referenced on Jan 2, 2024:

  • Import adds an invalid empty folder #291 (open)
  • No copy on windows #289 (open)

Jiyone and others added 3 commits on January 2, 2024 13:58.

sonarcloud bot commented on Jan 2, 2024:

Quality Gate failed

Failed conditions

  • 1 Security Hotspot
  • 4.6% Duplication on New Code (required ≤ 3%)

See analysis details on SonarCloud


Add a FIXME comment on possible deadlock over image cache in variables expansion

Darkroom size configure: do not apply pixel DPI on image buffer

Image resolution scaling is handled by PPD, not DPI; otherwise it would be doubled up.

Since some versions, the Scopes module is in the left panel, under the Preview/Navigation module. I would like to see the Scopes module come back in the top right corner. I like to see the scopes on top of the adjusting modules, to have visual control closer to the handles/numbers I am changing.

AtB martin

Scopes eventually will become a module in the pipeline.

Scopes eventually will become a module in the pipeline.

Really? I think it should be easier to access, in a fixed place. And it doesn't affect the output result; the pipeline is about computing the result, right?

Bringing it into the pipeline as a module will allow sampling values at any point in the pipeline.

Description of the bug

While working with the Rotate and Perspective module in Ansel version ca-74e8a (Win10, i7-2600, GeForce GTX 1050 Ti), some strange behavior happened to the background in the Darkroom atelier. See screenshot:

Clipboard01_Rotate

The mouse pointer is misaligned as well; I am not able to grab the handle. See:

Clipboard02_Rotate

AtB martin

Working now with Version a23b63a-win-exe.

Merry X-mas

Happy new year!

Description of the bug

When you request an import File > Import ("Copy to disk"), an empty folder is created with the current date and the previous "Project name"

To Reproduce

  1. Go to "File > Import"
  2. Select location
  3. Click on any file
  4. See import folder - an empty folder has been created

Expected behavior

The folder should be created after clicking the Import button

Screenshots

image

System

  • Ansel version : 5b98f93
  • OS : linux 6.6.6-200.fc39.x86_64
  • Linux - Distro : Fedora Linux 39 (Workstation Edition)

WIP in #294

Pipeline hash : make it more deterministic

Remove the contribution of some GUI params (now handled by explicitly disabling cache).

Now we can compute the hash of a pipe module from outside the module, so we can get a cache line from GUI if needed.
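A deterministic module hash of this kind can be sketched as order-dependent mixing over only the processing-relevant parameters; this is a generic hash-combine sketch, not Ansel's actual hashing code.

```cpp
#include <cstdint>

// Hedged sketch (not Ansel's actual hashing code): combine the hashes
// of processing-relevant parameters in a fixed order; GUI-only state is
// simply never fed in, so the result is deterministic across redraws.
uint64_t combineHash(uint64_t seed, uint64_t v) {
  // 64-bit hash_combine-style mixing (boost-inspired constant).
  return seed ^ (v + 0x9e3779b97f4a7c15ULL + (seed << 6) + (seed >> 2));
}
```

Because the mixing is order-dependent, the same parameters hashed in the same order always yield the same cache key.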

Merge pull request #590 from LebedevRI/sonar

Handle some new Sonar issues

Bauhaus: remove combobox informative sections

This is the poster case of solving bloat with more bloat.

Fix #288

Codecov Report

Attention: 33 lines in your changes are missing coverage. Please review.

Comparison is base (c2fd1a5) 58.98% compared to head (69601a0) 59.00%.

Files Patch % Lines
src/librawspeed/decoders/Cr2Decoder.cpp 23.07% 20 Missing :warning:
src/librawspeed/metadata/Camera.cpp 59.37% 8 Missing and 5 partials :warning:
@@ Coverage Diff @@
## develop #590 +/- ##
===========================================
+ Coverage 58.98% 59.00% +0.01% 
===========================================
 Files 247 247 
 Lines 14500 14510 +10 
 Branches 1981 1983 +2 
===========================================
+ Hits 8553 8561 +8 
- Misses 5829 5831 +2 
 Partials 118 118 
Flag Coverage Δ
benchmarks 9.77% <35.00%> (+0.01%) :arrow_up:
integration 46.86% <51.92%> (+0.01%) :arrow_up:
linux 56.71% <50.94%> (+0.01%) :arrow_up:
macOS 20.36% <27.11%> (-0.01%) :arrow_down:
rpu_u 46.86% <51.92%> (+0.01%) :arrow_up:
unittests 18.17% <35.00%> (+0.02%) :arrow_up:
windows ∅ <ø> (∅)


Pixelpipe: init cache sizes: account for the case without GUI (CLI).

Now, why we start a preview pipe in CLI is beyond me…

Darkroom: remove superfluous pipe recomputes on nodes cleanup when leaving.

Why?

Pipeline cache: add an API to temporarily disable cache starting at a module.

Handle mask previews properly, aka without flushing the whole cache, and without creating a nightmare of spaghetti.

Merge pull request #589 from LebedevRI/ci

CI: macOS: Xcode 15.1 is available

Merge pull request #588 from LebedevRI/buffers

Say "no" to pointer arithmetic!

CMake: disable -Wunsafe-buffer-usage for Clang 16

It pointed at the variable declaration and not the arithmetic itself, which breaks silencing.

RawImageData::fixBadPixelsThread(): avoid type punning / pointer arithmetics

UncompressedDecompressor::decode12BitRawWithControl(): avoid pointer arithmetics

UncompressedDecompressor::decode12BitRawUnpackedLeftAligned(): avoid pointer arithmetics

JpegDecompressor: silence warning about pointer arithmetics

It's horrible, but I don't believe anything can be done about it.

This has been a long road.

Codecov Report

Attention: 72 lines in your changes are missing coverage. Please review.

Comparison is base (579ea88) 59.01% compared to head (a45e486) 59.03%.

Files Patch % Lines
src/librawspeed/common/RawImageDataU16.cpp 18.51% 22 Missing :warning:
...awspeed/decompressors/UncompressedDecompressor.cpp 0.00% 15 Missing :warning:
src/librawspeed/decoders/NefDecoder.cpp 0.00% 13 Missing :warning:
src/librawspeed/common/RawImageDataFloat.cpp 0.00% 7 Missing :warning:
src/librawspeed/decoders/ArwDecoder.cpp 50.00% 7 Missing :warning:
src/librawspeed/common/TableLookUp.cpp 62.50% 3 Missing :warning:
src/librawspeed/decoders/Cr2Decoder.cpp 0.00% 3 Missing :warning:
src/librawspeed/decoders/CrwDecoder.cpp 0.00% 1 Missing :warning:
src/librawspeed/decoders/DngDecoder.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #588 +/- ##
===========================================
+ Coverage 59.01% 59.03% +0.01% 
===========================================
 Files 247 247 
 Lines 14502 14502 
 Branches 1980 1981 +1 
===========================================
+ Hits 8559 8561 +2 
 Misses 5823 5823 
+ Partials 120 118 -2 
Flag Coverage Δ
benchmarks 9.82% <35.57%> (-0.01%) :arrow_down:
integration 46.84% <44.02%> (+0.01%) :arrow_up:
linux 56.70% <48.25%> (+0.03%) :arrow_up:
macOS 20.46% <36.30%> (+<0.01%) :arrow_up:
rpu_u 46.84% <44.02%> (+0.01%) :arrow_up:
unittests 18.15% <0.69%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)


Merge pull request #587 from LebedevRI/cmake

CMake: split up monolithic rawspeed lib, link it together from smaller libs

CMake: split up monolithic rawspeed lib, link it together from smaller libs

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (c5e3e26) 58.40% compared to head (406ca1c) 59.01%.

@@ Coverage Diff @@
## develop #587 +/- ##
===========================================
+ Coverage 58.40% 59.01% +0.61% 
===========================================
 Files 247 247 
 Lines 14481 14502 +21 
 Branches 1968 1980 +12 
===========================================
+ Hits 8457 8559 +102 
+ Misses 5904 5823 -81 
 Partials 120 120 
Flag Coverage Δ
benchmarks 9.82% <ø> (+0.31%) :arrow_up:
integration 46.82% <ø> (ø)
linux 56.66% <ø> (ø)
macOS 20.46% <ø> (+1.28%) :arrow_up:
rpu_u 46.82% <ø> (ø)
unittests 18.15% <ø> (+0.43%) :arrow_up:
windows ∅ <ø> (∅)


Description of the bug

strange artefact display on export

Two different exports 1080-_DSC4800 1080-_DSC4800_01

To Reproduce

Unknown

Expected behavior

Same as preview/export

Context

https://heloworld.me/nc/s/DT9b2nEQdP6e7nG

Screenshots

Ok in darktoom image

Ko in lighttable (the nef and xmp is from the second dark one) image

System

Ansel version : https://github.com/aurelienpierreeng/ansel/commit/9163b3e11240522794a23cb0c7650dbefc01a33e

OS : Windows 10 Professional edition, version 22H2, OS build 19045.3803, Windows Feature Experience Pack 1000.19053.1000.0. Processor: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz. Installed RAM: 16.0 GB (15.9 GB usable). System type: 64-bit operating system, x64 processor.

Graphics card : Intel(R) HD Graphics 630 & NVIDIA GeForce GTX1050

Graphics driver : Intel 27.20.100.9171 & NVIDIA 31.0.15.2879

OpenCL installed : By default

The xmp re-imported in darktable 4.4.2 displays and exports all OK. I have made a copy of the folder and will keep it if you want other data.

Should be fixed by 2682f6c, but the nightly build didn't build. Re-open if not.

Fixed for export, but the thumbnails are still broken, sometimes black, sometimes with the same artifact. Another bug?

Description of the bug

When asking to import via File > Import > "Copy to disk", it only displays "result of the pattern:" with nothing after it, and the progress bar still displays 0/xx. When the import finishes, nothing is imported.

To Reproduce

  1. Go to "File > Import" ("Fichier > Importer")
  2. Click on "File handling > Copy to disk"
  3. Fill in a jobcode or not
  4. Set the project base directory ("Dossier de base du projet") on a local disk
  5. Set the project folder naming pattern ("Motif de nommage du dossier projet") to "$(EXIF.YEAR)-$(EXIF.MONTH)-$(EXIF.DAY)"
  6. Set the file naming pattern ("Motif de nommage des fichiers") to "$(FILE.NAME).$(FILE.EXTENSION)"
  7. Select some files to better see that the import will do nothing
  8. Click on "Import"

Expected behavior

Directory created with the EXIF timestamp; files exist on the local disk.

System

  • Ansel version : 9163b3e
  • OS : Windows 10 Professional edition, version 22H2, OS build 19045.3803, Windows Feature Experience Pack 1000.19053.1000.0. Processor: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz. Installed RAM: 16.0 GB (15.9 GB usable). System type: 64-bit operating system, x64 processor.

  • Graphics card : Intel(R) HD Graphics 630 & NVIDIA GeForce GTX1050

  • Graphics driver : Intel 27.20.100.9171 & NVIDIA 31.0.15.2879
  • OpenCL installed : By default

Additional context

  • Tried with external memory card XQD, SD, internal SD reader, or copy files from and to local C drive
  • Can you reproduce with another darktable version(s)?
  • It's not new; it was not tested before the version using the native file chooser
  • Can you reproduce with a RAW or Jpeg or both? All image types
  • Do you use lua scripts? None

https://matrix.to/#/!PjYYxhSUnNvgWkspze:matrix.org/$PGiqFrOFO0cXsTuvLtAOiAN1pL2AwheBCnQnRyxaDdY?via=matrix.org&via=ungleich.ch&via=club1.fr

Did you select a root project directory in the import popup? Any accents in the paths?

Yes. Filling in the jobcode doesn't change anything.

image

WIP in #294

2

auto value = ed["Exif.Photo.UserComment"];
auto str = value.toString(); // garbled

From the exiftool -v3 dump, it doesn't look like this comment is stored correctly:

 | | 0) UserComment = UNICODECinematic still of cat buying fish with money in paws, s[snip]
 | | - Tag 0x9286 (826 bytes, undef[826]):
 | | 004a: 55 4e 49 43 4f 44 45 00 00 43 00 69 00 6e 00 65 [UNICODE..C.i.n.e]
 | | 005a: 00 6d 00 61 00 74 00 69 00 63 00 20 00 73 00 74 [.m.a.t.i.c. .s.t]
 | | 006a: 00 69 00 6c 00 6c 00 20 00 6f 00 66 00 20 00 63 [.i.l.l. .o.f. .c]
 | | 007a: 00 61 00 74 00 20 00 62 00 75 00 79 00 69 00 6e [.a.t. .b.u.y.i.n]
 | | 008a: 00 67 00 20 00 66 00 69 00 73 00 68 00 20 00 77 [.g. .f.i.s.h. .w]
 | | [snip 746 bytes]

UNICODE type specifies UTF-8, whereas here it looks like UTF-16 is stored.
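A decode that follows the hex dump (assuming big-endian UTF-16 after the 8-byte charset prefix, as the 00 43 00 69 bytes suggest; the helper name is hypothetical and this is not the exiv2 API) would look like:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hedged sketch (hypothetical helper, not the exiv2 API): the tag body
// starts with an 8-byte charset code ("UNICODE\0"), followed here by
// big-endian UTF-16 text, as the hex dump (00 43 00 69 ... = "Ci...")
// shows. A reader that assumes UTF-8 after the prefix would produce
// the garbled string.
std::string decodeUserCommentUtf16BE(const std::vector<uint8_t>& tag) {
  std::string out;
  // Skip the 8-byte "UNICODE\0" charset prefix, then read 2-byte
  // big-endian code units (BMP only; surrogates ignored for brevity).
  for (size_t i = 8; i + 1 < tag.size(); i += 2) {
    uint16_t cu = (uint16_t(tag[i]) << 8) | tag[i + 1];
    if (cu < 0x80)
      out += char(cu); // ASCII range; real code would emit UTF-8
  }
  return out;
}
```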

Minor adjustment to another test result

Signed-off-by: Jim Easterbrook

Update test reference files

Signed-off-by: Jim Easterbrook

Update tests with Exif.CanonLe.LensSerialNumber

All of them have all-zero serial numbers, so they aren't testing this PR directly.

Signed-off-by: Jim Easterbrook

Decode Exif.CanonLe.LensSerialNumber

The first 5 bytes of the CanonLe block give the serial number when converted to hexadecimal. This PR also fixes bug 2138, as it stops the 30-byte length of the block from being truncated to a multiple of 4 bytes.

Signed-off-by: Jim Easterbrook
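The decoding rule stated in the commit message (first 5 bytes, printed as hexadecimal) can be illustrated with a small sketch; the helper name is hypothetical and this is not the exiv2 implementation.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hedged illustration (hypothetical helper, not the exiv2 code): per
// the commit message, the first 5 bytes of the CanonLe block, printed
// as hex, form the lens serial number.
std::string lensSerialFromCanonLe(const uint8_t (&block)[5]) {
  std::string out;
  char buf[3];
  for (uint8_t b : block) {
    std::snprintf(buf, sizeof buf, "%02x", b);
    out += buf; // two lowercase hex digits per byte
  }
  return out;
}
```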

CanonLe block has no size element

This is what has been causing corruption in my file writing: reducing the block from 30 bytes to 28 offsets subsequent data by two bytes.

Signed-off-by: Jim Easterbrook

Merge pull request #586 from LebedevRI/unchecked-optional-access

Fix a few cases of std::optional mishandling

Ooops...

Codecov Report

Attention: 5 lines in your changes are missing coverage. Please review.

Comparison is base (6305e5f) 58.40% compared to head (f14ef00) 58.40%.

Files Patch % Lines
src/librawspeed/decoders/Cr2Decoder.cpp 37.50% 5 Missing :warning:
@@ Coverage Diff @@
## develop #586 +/- ##
========================================
 Coverage 58.40% 58.40% 
========================================
 Files 247 247 
 Lines 14481 14481 
 Branches 1964 1964 
========================================
 Hits 8457 8457 
 Misses 5904 5904 
 Partials 120 120 
Flag Coverage Δ
benchmarks 9.51% <0.00%> (ø)
integration 46.82% <50.00%> (ø)
linux 56.66% <50.00%> (ø)
macOS 19.18% <0.00%> (ø)
rpu_u 46.82% <50.00%> (ø)
unittests 17.71% <0.00%> (ø)
windows ∅ <ø> (∅)


VC5Decompressor: PrefixCodeDecoder is late-init'd

HighPassBand should take reference to optional...

Merge remote-tracking branch 'upstream/pr/585' into develop

  • upstream/pr/585:
      • rstest: wrap argv handling into Array1DRef
      • rsbench: wrap argv handling into Array1DRef
      • MD5: internally operate on Array1DRef / CroppedArray1DRef, not pointer arithmetic
      • CroppedArray1DRef: add addressOf() function, rewrite begin()/operator() in its terms
      • Array1DRef: add addressOf() function, rewrite begin()/end()/operator() in its terms
      • MD5: compress(): take Array1DRef, not a raw pointer
      • MD5Test: rewrite CheckTestCaseSetInParts in terms of std::string_view
      • MD5: hash_to_string(): avoid type punning / pointer arithmetics
      • Array1DRef: support const-preserving conversion from Array1DRef to Array1DRef
      • rs-identify: wrap argv handling into Array1DRef
      • Cr2Decompressor: make predNext operate on plain Array1DRef
      • CroppedArray1DRef(): return non-const pointer from begin()
      • Cr2Decompressor: avoid pointer manipulations in predNext
      • CroppedArray1DRef: un-const the inner Array1DRef
      • libFuzzer_dummy_main: wrap argv handling into Array1DRef
      • RawImageDataU16::setWithLookUp(): avoid type punning / pointer arithmetics
      • splitString(): rewrite in terms of std::string_view
      • splitString(): add a few more tests
      • Array2DRef is bounds-checked already
      • Array1DRef is bounds-checked already

Description of the bug

Ansel crash with SegFault when the "Settings" field in the white balance module is filled in with the name of the camera.

To Reproduce 1. Open "white balance module" 2. Select your camera body in the settings combobox.

Expected behavior

This line should only be decorative.

Commits ba4dc39 , maybe previous

System

  • Bug discovered on GNU/Linux Mint 21.2
  • Confirmed on Ubuntu 22.04.3

Additional context

  • I'm just confirming a bug discovered by another person.
  • No other preset in the list causes the crash (at least for me).
  • I've attached the crash dump to this report

ansel-test_231220-231159.log

CroppedArray1DRef: add addressOf() function, rewrite begin()/operator() in its terms

... so that begin() is well-defined for numElts=0.

Array1DRef: add addressOf() function, rewrite begin()/end()/operator() in its terms

... so that begin()/end() are well-defined for numElts=0.
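The addressOf() pattern described above can be sketched as follows. This is a minimal illustrative version, not the actual rawspeed class: the point is that begin()/end() do pointer arithmetic only and never form a reference to an element, so an empty view (numElts == 0) is still well-defined.

```cpp
#include <cassert>

// Minimal sketch of the Array1DRef pattern (illustrative, not rawspeed's
// implementation). addressOf() computes an address without dereferencing,
// and begin()/end()/operator() are rewritten in its terms, so end() never
// touches a non-existent element when numElts == 0.
template <typename T> class Array1DRef {
  T* data_;
  int numElts_;

public:
  Array1DRef(T* data, int numElts) : data_(data), numElts_(numElts) {}

  T* addressOf(int index) const { return data_ + index; } // no dereference
  T* begin() const { return addressOf(0); }
  T* end() const { return addressOf(numElts_); }
  T& operator()(int index) const { return *addressOf(index); }
  int size() const { return numElts_; }
};
```

With this shape, `begin() == end()` holds for an empty view, and ranged-for loops over it are valid no-ops.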

Cr2Decompressor: make predNext operate on plain Array1DRef

A simple pointer is less state than a pointer plus an offset. Here it somewhat matters.

Darkroom: ensure the pipe gets computed when entering

Rework again the control sequence of pipes, handling pipe restart internally in case of shutdown. This guarantees faster restarts with the new logic of pipes waiting for each other.

Description of the bug

The eyedropper frame size and position are not "saved" when the last action before switching OFF the eyedropper view was to reset its frame to full size using right-click.

Also, the pipe doesn't recompute after resetting an eyedropper frame; I believe this is related.

To Reproduce

  1. Click on an eyedropper button.
  2. Draw a zone in the picture.
  3. Right-click on the eyedropper's frame to reset it; the selection now covers the whole picture. (Side effect: the pipeline doesn't recompute.)
  4. Switch the eyedropper button OFF; the zone delimitation disappears.
  5. Switch the eyedropper button ON.
  6. See that the zone delimitation is the same as after step 2.

Expected behavior

The pipe should recompute after an eyedropper's frame reset. The size and position of the frame should be recorded when the button is switched OFF and restored later.

System

  • Ansel version : at least 9beb399
  • OS : Win11 22H2

I'm seeing similar behavior in ba4dc39 on Linux. Clicking an eyedropper button briefly shows a zone frame, but then the frame quickly disappears.

That should be fixed now?

yep thanks

GUI pipelines : force them to wait for each other

Allowing GUI pipelines to run concurrently would be nice with several GPUs. The problem is that not all modules use OpenCL, so both pipes would still have to share the CPU, which is a bottleneck.

Forcing pipelines to run in sequence and wait for each other reduces not only the individual wall-clock runtime but also the cumulative one, meaning users get a full GUI refresh (much) sooner, especially on weak hardware.

That may also allow (re)building the pipeline nodes just once for both pipes in the future.
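The serialization above can be sketched with a single lock shared by the GUI pipelines. This is an illustrative sketch under assumed names (gui_pipe_lock, run_gui_pipeline are hypothetical, not the Ansel API):

```cpp
#include <cassert>
#include <functional>
#include <mutex>

// One lock shared by the main-view and preview pipelines: the second
// pipeline blocks until the first has finished, instead of both fighting
// over the same CPU cores. (Names are hypothetical, not the Ansel API.)
static std::mutex gui_pipe_lock;

void run_gui_pipeline(const std::function<void()>& process) {
  std::lock_guard<std::mutex> lock(gui_pipe_lock); // wait for the other pipe
  process();
}
```

The trade-off stated above is that each pipe then runs with the full CPU to itself, so the first full refresh arrives sooner even though nothing runs in parallel.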

Rework the pipe shutdown mechanism

  1. be more aggressive on killswitches
  2. invalidate the cache if killswitch is fired after module process()
  3. write the atomic flag only from GUI thread
  4. do not loop in case of invalid pipe output but exit the thread job ASAP. Another pipe is launched from GUI thread anyway.
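The four points above can be sketched as a single worker loop. This is a hypothetical sketch, not the actual Ansel code: the GUI thread is the only writer of the kill flag, the worker checks it after each module's process(), and it exits immediately instead of looping on invalid output.

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical sketch of the shutdown scheme (illustrative names, not the
// Ansel API). The atomic killswitch is written only by the GUI thread; the
// worker reads it after every module process() and invalidates its cache
// if the flag fired mid-module, since that output is for a stale state.
struct Pipe {
  std::atomic<bool> killswitch{false}; // written only by the GUI thread
  bool cache_valid = true;

  // Returns true if the pipe ran to completion.
  bool run(const std::vector<std::function<void()>>& modules) {
    for (const auto& process : modules) {
      process();
      if (killswitch.load(std::memory_order_acquire)) {
        cache_valid = false; // invalidate: killswitch fired after process()
        return false;        // exit ASAP; the GUI launches another pipe
      }
    }
    return true;
  }
};
```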

What is the current problem you are facing?

The pipe doesn't recompute after:

  • deleting a module instance
  • resetting the history by clicking on the module's reset button (Edit > delete development works fine)

... and keeps showing the image as if no action was made, until the next user action that refreshes the view. We have to manually do Run > Clear all pipeline caches, or do any other action that recomputes the pipe (resetting the module before deleting it, for example).

Where in your workflow does your problem occur? Each time I have to delete a module or reset the history.

Darkroom: zoom pipeline invalidation should not happen on user event if modules are capturing them

Gradient slider (parametric mask): align GUI interactions with Bauhaus

  1. commit params on button released
  2. use a delayed commit on incremental user input (dragging, scrolling).

Bauhaus: rework the timeout logic

Commit to history after 350 ms no matter what. If another user interaction happens within those 350 ms, destroy the previous timer and start again. That leaves the user 350 ms to complete another scrolling step.

May not be enough for senior citizens, and too much for gamers, in which case this needs to become a user config.
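The timeout logic above amounts to a debounce: each event restarts a 350 ms window, and the commit fires only when a window elapses untouched. A minimal sketch, with illustrative names rather than the actual Bauhaus code:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative debounce sketch (not the actual Bauhaus code): every user
// event restarts a 350 ms window; the history commit fires exactly once,
// when the window elapses with no further events.
class Debouncer {
  int64_t deadline_ms_ = -1; // -1 means no commit pending
  static constexpr int64_t kDelayMs = 350;

public:
  // Called on every incremental user input (drag, scroll step, ...).
  void onUserEvent(int64_t now_ms) { deadline_ms_ = now_ms + kDelayMs; }

  // Polled with the current time; returns true exactly once per window.
  bool shouldCommit(int64_t now_ms) {
    if (deadline_ms_ >= 0 && now_ms >= deadline_ms_) {
      deadline_ms_ = -1;
      return true;
    }
    return false;
  }
};
```

In a GTK codebase the same effect is typically achieved by destroying the pending timeout source and adding a new one on each event; the poll-based form here just makes the logic testable.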

add history item: collapse entries if only ON/OFF was toggled. Use integrity hash for param detection.

Add a module hash for module vs. history states validation and consistency. Re-use it in pipeline hash.

Fix #285
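An integrity hash like the one described above can be as simple as a byte hash over the module's raw parameter blob, compared between the module and its history state and folded into the pipeline hash. The following is an illustrative FNV-1a sketch, not the actual Ansel hash function:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative FNV-1a byte hash (not the actual Ansel implementation).
// Hashing a params blob yields a cheap integrity token: if the module's
// hash differs from the history item's hash, the params changed.
uint64_t hash_bytes(const void* data, size_t len,
                    uint64_t seed = 14695981039346656037ULL) {
  const auto* p = static_cast<const uint8_t*>(data);
  uint64_t h = seed;
  for (size_t i = 0; i < len; ++i) {
    h ^= p[i];
    h *= 1099511628211ULL; // FNV prime
  }
  return h;
}
```

Taking a seed parameter lets per-module hashes be chained into a pipeline-wide hash.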

Module GUI on/off state init : handle in GUI code

Was previously part of history API. Wrong scope.

Blend ops: ensure all GUI controls are updated when switching history states

Dev: handle pipe invalidation and reprocessing in dt_dev_pop_history_items

Fix undo events. This function is high-level enough to consider it GUI.

commit blend params: handle it in history API

Doesn't belong in the pipeline API.

Pop history : always assume IOP list changed

Copying the GList of IOPs and browsing them twice to check if they are equal doesn't yield substantial speed-ups compared to blindly reconstructing a pipeline from scratch.

Merge pull request #584 from LebedevRI/md5

MD5 hasher QOL refactor (-2%)

Slap buffering on top of MD5 hasher interface (-1%)

The bluntness of the simple md5_hash() interface came up in e.g. e7e245cdb65888ed805c51f5eb6ba1cd791045cf: we might want to hash data that is generally sequential but has some padding between parts of it, and that's not really possible with the old interface.

Amusingly, the new code is even a tiny bit faster.

MD5: further extract buffering into a separate data structure (-1%)

The original buffering implementation forced even full buffers to be copied to the "coalescing" buffer, which seems suboptimal.

Abstracting that away avoids the extra copy and is, amusingly, a tiny bit faster yet again.

Support hashing of several buffers as if they were consecutive, while being a tiny bit faster. Not really important, but may be nice to have.
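The buffering layer described above can be sketched as follows. This is an illustrative sketch, not the rawspeed implementation: callers feed arbitrary slices, the hasher treats them as one consecutive stream, and full blocks that arrive aligned are hashed directly without the extra copy. The MD5 compress step is stubbed as a callback.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <functional>

// Sketch of a buffering hasher (illustrative, not rawspeed's MD5 code).
// Several buffers fed via update() are hashed as if they were consecutive.
// The fast path hands full 64-byte blocks straight to compress(), avoiding
// the copy into the coalescing buffer that the original implementation made.
class BufferingHasher {
  std::array<uint8_t, 64> buf_{};
  size_t fill_ = 0;
  std::function<void(const uint8_t*)> compress_; // stub for MD5 compress

public:
  explicit BufferingHasher(std::function<void(const uint8_t*)> compress)
      : compress_(std::move(compress)) {}

  void update(const uint8_t* data, size_t len) {
    while (len > 0) {
      if (fill_ == 0 && len >= 64) { // fast path: hash in place, no copy
        compress_(data);
        data += 64;
        len -= 64;
        continue;
      }
      size_t take = std::min(len, size_t{64} - fill_);
      std::memcpy(buf_.data() + fill_, data, take);
      fill_ += take;
      data += take;
      len -= take;
      if (fill_ == 64) { // coalesced a full block: hash and reset
        compress_(buf_.data());
        fill_ = 0;
      }
    }
  }
};
```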

Codecov Report

Attention: 102 lines in your changes are missing coverage. Please review.

Comparison is base (3f47d7f) 58.68% compared to head (5b3dc7c) 58.40%.

Files Patch % Lines
src/utilities/rstest/md5.h 25.78% 95 Missing :warning:
src/utilities/rstest/md5.cpp 77.77% 4 Missing :warning:
src/utilities/rstest/MD5Benchmark.cpp 80.00% 2 Missing and 1 partial :warning:
@@ Coverage Diff @@
## develop #584 +/- ##
===========================================
- Coverage 58.68% 58.40% -0.29% 
===========================================
 Files 246 247 +1 
 Lines 14330 14452 +122 
 Branches 1952 1963 +11 
===========================================
+ Hits 8410 8440 +30 
- Misses 5802 5894 +92 
 Partials 118 118 
Flag Coverage Δ
benchmarks 9.48% <34.30%> (+0.87%) :arrow_up:
integration 46.82% <62.50%> (-0.04%) :arrow_down:
linux 56.66% <67.36%> (+0.02%) :arrow_up:
macOS 19.15% <8.87%> (-0.06%) :arrow_down:
rpu_u 46.82% <62.50%> (-0.04%) :arrow_down:
unittests 17.61% <29.65%> (+0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Description of the bug

I have noticed that the history module adds new items on top of the currently selected item in the list, but there is a strange exception to this:

When you have an earlier item selected to go backward in the history: if there is at least one "deactivate" and/or "activate" module item above the current position, and you make any change to the picture, then the new event is added at the top of the list without deleting the items above the currently selected item.

This causes all the events above the current position to be reapplied, because the current position jumps to the top of the list without the unwanted items being deleted.

To Reproduce

  1. Edit a picture
  2. Deactivate at least one module near the end.
  3. Click on an item in the history list of the history module that is below a module deactivation item.
  4. Note which number was chosen.
  5. Do an edit.
  6. See that the new item is added above the top line and that the items above the item chosen at step 3 have not been removed (note: item 19, color equalizer, disappeared because that module was active once item 18 was selected).

Expected behavior

Anything above the current item should be removed before adding the new history line.

Which commit introduced the error

The commit that now shows all the history items in the history module was just merged (a518023b0a8414c0a394702f7456877ced185859).

System

  • darktable version : a518023b0a8414c0a394702f7456877ced185859
  • OS : Win11 22H2
  • Memory : more than granny
  • Graphics card : Intel
  • Graphics driver : Intel
  • OpenCL installed : yes
  • OpenCL activated : yes
  • GTK+ : 3
  • gcc : yes
  • cflags : see build.sh
  • CMAKE_BUILD_TYPE : the one in build.sh

Additional context

  • Can you reproduce with another darktable version(s)? no
  • Can you reproduce with a RAW or Jpeg or both? both
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes
  • Do you use lua scripts? no

Bump actions/download-artifact from 3 to 4

Bumps actions/download-artifact from 3 to 4. - Release notes - Commits


updated-dependencies: - dependency-name: actions/download-artifact dependency-type: direct:production update-type: version-update:semver-major ...

Signed-off-by: dependabot[bot]

Bump actions/upload-artifact from 3 to 4

Bumps actions/upload-artifact from 3 to 4. - Release notes - Commits


updated-dependencies: - dependency-name: actions/upload-artifact dependency-type: direct:production update-type: version-update:semver-major ...

Signed-off-by: dependabot[bot]

Bump github/codeql-action from 2 to 3

Bumps github/codeql-action from 2 to 3. - Release notes - Changelog - Commits


updated-dependencies: - dependency-name: github/codeql-action dependency-type: direct:production update-type: version-update:semver-major ...

Signed-off-by: dependabot[bot]

Remove deprecated function calls.

Bump VERSION_MICRO to 99

Commit a518023 in aurelienpierreeng/ansel (forked from edgardoh/darktable)

dev: add history item : don't play smart-ass on history collapsing

Committed Dec 18, 2023 by aurelienpierre. 1 parent: bc4a4bd. 1 changed file, src/develop/develop.c, with 14 additions and 39 deletions.

Bumps actions/download-artifact from 3 to 4.

Sourced from actions/download-artifact's releases .

v4.0.0

What's Changed

The release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts. They have numerous performance and behavioral improvements.

For more information, see the @actions/artifact documentation.

New Contributors

@bflad made their first contribution in actions/download-artifact#194

Full Changelog : https://github.com/actions/download-artifact/compare/v3...v4.0.0

v3.0.2

Bump @actions/artifact to v1.1.1 - actions/download-artifact#195

Fixed a bug in Node16 where, if an HTTP download finished too quickly (<1 ms, e.g. when it's mocked), we attempted to delete a temp file that had not been created yet (actions/toolkit#1278)

v3.0.1

Bump @actions/core to 1.10.0

7a1cd32 Merge pull request #246 from actions/v4-beta

8f32874 licensed cache

b5ff844 Merge pull request #245 from actions/robherley/v4-documentation

f07a0f7 Update README.md

7226129 update test workflow to use different artifact names for matrix

ada9446 update docs and bump @actions/artifact

7eafc8b Merge pull request #244 from actions/robherley/bump-toolkit

3132d12 consume latest toolkit

5be1d38 Merge pull request #243 from actions/robherley/v4-beta-updates

465b526 consume latest @actions/toolkit

Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


You can trigger Dependabot actions by commenting on this PR: - @dependabot rebase will rebase this PR - @dependabot recreate will recreate this PR, overwriting any edits that have been made to it - @dependabot merge will merge this PR after your CI passes on it - @dependabot squash and merge will squash and merge this PR after your CI passes on it - @dependabot cancel merge will cancel a previously requested merge and block automerging - @dependabot reopen will reopen this PR if it is closed - @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - @dependabot show ignore conditions will show all of the ignore conditions of the specified dependency - @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Review these changes using an interactive CodeSee Map

Legend

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (130064d) 63.88% compared to head (ba1807e) 63.88%. Report is 6 commits behind head on main.

@@ Coverage Diff @@
## main #2866 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22369 22369 
 Branches 10865 10865 
=======================================
 Hits 14291 14291 
 Misses 5856 5856 
 Partials 2222 2222 


Bumps actions/upload-artifact from 3 to 4.

Sourced from actions/upload-artifact's releases .

v4.0.0

What's Changed

The release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts. They have numerous performance and behavioral improvements.

For more information, see the @actions/artifact documentation.

New Contributors

@vmjoseph made their first contribution in actions/upload-artifact#464

Full Changelog : https://github.com/actions/upload-artifact/compare/v3...v4.0.0

v3.1.3

What's Changed

chore(github): remove trailing whitespaces by @ljmf00 in actions/upload-artifact#313

Bump @actions/artifact version to v1.1.2 by @bethanyj28 in actions/upload-artifact#436

Full Changelog : https://github.com/actions/upload-artifact/compare/v3...v3.1.3

v3.1.2

Update all @actions/* NPM packages to their latest versions - #374

Update all dev dependencies to their most recent versions - #375

v3.1.1

Update actions/core package to latest version to remove set-output deprecation warning #351

v3.1.0

What's Changed

Bump @actions/artifact to v1.1.0 (actions/upload-artifact#327)

Adds checksum headers on artifact upload (actions/toolkit#1095 ) (actions/toolkit#1063 )

c7d193f Merge pull request #466 from actions/v4-beta

13131bb licensed cache

4a6c273 Merge branch 'main' into v4-beta

f391bb9 Merge pull request #465 from actions/robherley/v4-documentation

9653d03 Apply suggestions from code review

875b630 add limitations section

ecb2146 add compression example

5e7604f trim some repeated info

d6437d0 naming

1b56155 s/v4-beta/v4/g

Additional commits viewable in compare view



Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (130064d) 63.88% compared to head (7363c4a) 63.88%. Report is 6 commits behind head on main.

@@ Coverage Diff @@
## main #2865 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22369 22369 
 Branches 10865 10865 
=======================================
 Hits 14291 14291 
 Misses 5856 5856 
 Partials 2222 2222 


Bumps github/codeql-action from 2 to 3.

Sourced from github/codeql-action's releases .

CodeQL Bundle v2.15.4

Bundles CodeQL CLI v2.15.4

(changelog , release )

Includes the following CodeQL language packs from github/codeql@codeql-cli/v2.15.4 :

codeql/cpp-queries (changelog , source )

codeql/cpp-all (changelog , source )

codeql/csharp-queries (changelog , source )

codeql/csharp-all (changelog , source )

codeql/go-queries (changelog , source )

codeql/go-all (changelog , source )

codeql/java-queries (changelog , source )

codeql/java-all (changelog , source )

codeql/javascript-queries (changelog , source )

codeql/javascript-all (changelog , source )

codeql/python-queries (changelog , source )

codeql/python-all (changelog , source )

codeql/ruby-queries (changelog , source )

codeql/ruby-all (changelog , source )

codeql/swift-queries (changelog , source )

codeql/swift-all (changelog , source )

CodeQL Bundle

Bundles CodeQL CLI v2.15.3

(changelog , release )

Includes the following CodeQL language packs from github/codeql@codeql-cli/v2.15.3 :

codeql/cpp-queries (changelog , source )

codeql/cpp-all (changelog , source )

codeql/csharp-queries (changelog , source )

codeql/csharp-all (changelog , source )

codeql/go-queries (changelog , source )

codeql/go-all (changelog , source )

codeql/java-queries (changelog , source )

codeql/java-all (changelog , source )

codeql/javascript-queries (changelog , source )

codeql/javascript-all (changelog , source )

codeql/python-queries (changelog , source )

codeql/python-all (changelog , source )

codeql/ruby-queries (changelog , source )

codeql/ruby-all (changelog , source )

codeql/swift-queries (changelog , source )

codeql/swift-all (changelog , source )

CodeQL Bundle

Bundles CodeQL CLI v2.15.2

(changelog , release )

Includes the following CodeQL language packs from github/codeql@codeql-cli/v2.15.2 :

codeql/cpp-queries (changelog , source )

... (truncated)

Sourced from github/codeql-action's changelog .

3a9f6a8 update javascript files

cc4fead update version in various hardcoded locations

183559c Merge branch 'main' into update-bundle/codeql-bundle-v2.15.4

5b52b36 reintroduce PR check that confirm action can be still be compiled on node16

5b19bef change to node20 for all actions

f2d0c2e upgrade node type definitions

d651fbc change to node20 for all actions

382a50a Merge pull request #2021 from github/mergeback/v2.22.9-to-main-c0d1daa7

458b422 Update checked-in dependencies

5e0f9db Update changelog and version after v2.22.9

See full diff in compare view



Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (130064d) 63.88% compared to head (144c5b9) 63.88%. Report is 6 commits behind head on main.

@@ Coverage Diff @@
## main #2864 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22369 22369 
 Branches 10865 10865 
=======================================
 Hits 14291 14291 
 Misses 5856 5856 
 Partials 2222 2222 


build.sh: check if file exists before creating .desktop symlink

When building on Windows, Ansel's .desktop file is not generated and the script gets stuck when trying to create a symlink to it.

Description of the bug

When copy-pasting a development to a picture, the paste is correctly applied, but none of the central view, histograms, or thumbnail refresh.

This happens only in darkroom.

To Reproduce

  1. Go to darkroom
  2. Copy a development
  3. Paste onto a picture
  4. See that nothing has refreshed

Expected behavior

Ansel should recompute the pipe caches

System

  • Ansel version : at least 53c609c
  • OS : Win11 22H2
  • Memory :
  • Graphics card :
  • Graphics driver :
  • OpenCL installed :
  • OpenCL activated :
  • Xorg :
  • Desktop :
  • GTK+ :
  • gcc :
  • cflags :

Additional context

  • Can you reproduce with another darktable version(s)? yes
  • Can you reproduce with a RAW or Jpeg or both? both
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes
  • Do you use lua scripts? no

Ansel should recompute the pipe caches

It's the thumbnail cache that's affected here; the pipe cache affects darkroom only

the pipe cache affects darkroom only

Not sure what to think about that but, as I said, this happens only in darkroom.

Do you mean that copy/paste development is not supposed to recompute the pipe caches yet, because it was something available in lighttable only? Then this report would be an enhancement.

Guided filter still produces NaN (then black squares) in high-ISO pictures. Can't figure out why; the mitigation is to raise the black point.

Quality Gate passed

The SonarCloud Quality Gate passed, but some issues were introduced.

  • 92 new issues
  • 0 security hotspots
  • No data about coverage
  • 2.4% duplication on new code

See analysis details on SonarCloud

Thank you !

We first need to remove all stdout writes in several OpenMP dt_omp_firstprivate() functions to be able to compile colorequal.c 😁

(see https://github.com/aurelienpierreeng/ansel/actions/runs/7250867982/job/19752002681?pr=283#step:4:1292 )

Removed my comments...

Merge pull request #2863 from Gitoffthelawn/patch-3

Update links to HTTPS where possible

This seems like a minor issue, perhaps it's just the documentation that needs updating... Right-clicking the gamut and softproof buttons at the bottom of the darkroom view doesn't give me a choice of screen profile, just the softproof profile. I have LittleCMS ticked in the Image Processing preferences.

However, setting the screen profile seems to work using Display in the main menu.

This is on version 06d9cda; I can't run newer ones due to GCC 12 etc.

Screen profile moved to the global menu "Display".

Looks like you're not interested in documentation issues?

WHAT DOES THIS DO?

When building on Windows, Ansel's .desktop file is not generated and the script gets stuck when trying to create a symlink to it, with the message: ln: failed to create symbolic link '/usr/share/applications/ansel.desktop': No such file or directory

I added a check for the file's existence before the script creates the symlink.

Quality Gate failed

Failed conditions

4.7% Duplication on New Code (required ≤ 3%)

See analysis details on SonarCloud


Please rebase.

How do I accomplish that task? Can you advise?

Because the changes are minor, I can obviously just edit the current main branch and implement the same changes, but I would like to learn how to do what you describe (for situations where the changes are not so minor).

How do I accomplish that task? Can you advise?

You can try the suggested links under the "This branch has conflicts..." below.

Thanks. It wound up requiring manual editing. It surprises me that GitHub doesn't handle this situation more efficiently. But no big deal for small changes.

It wound up requiring manual editing.

Yep, usually does when there is a conflict, e.g. a change to the same line in two different commits. How do you know which one is meant to be the "latest"? I could've merged this one first and then the other one would've conflicted... So an explicit resolution is required.

Thank you. What surprised me is that one of the changes was on its own line, but GitHub complained. Any idea why?

Description of the bug

Ctrl + mouse and Shift + mouse, used to modify values in sliders by small or large amount, don't work since release 57ed58d. Hence, it's impossible to specify exact values in, say, watermark positions.

To Reproduce

Start Ansel 57ed58d or newer, then try to use Ctrl + mouse or Shift + mouse in any slider.

Expected behavior

Ctrl + mouse should modify slider values by small amounts, while Shift + mouse by large amounts. As per documentation.

Context

Screenshots

Screencast

Which commit introduced the error

Ansel 57ed58d

System

  • darktable version : https://github.com/aurelienpierreeng/ansel/commit/702c4c5f0a84f1972b9bf0d90a6250d6850545fe
  • OS : Linux 5.15.0-89-generic
  • Linux - Distro : Linux Mint 21.2 (VIctoria)
  • Memory : 8GB
  • Graphics card : NVIDIA GeForce 920M
  • Graphics driver : nvidia-driver-470
  • OpenCL installed : unknown
  • OpenCL activated : unknown
  • Xorg : 1:7.7+23ubuntu2
  • Desktop : Mate
  • GTK+ : unknown
  • gcc : unknown
  • cflags : unknown
  • CMAKE_BUILD_TYPE : unknown

Additional context

  • Can you reproduce with another darktable version(s)? yes, with all versions newer than 57ed58d
  • Can you reproduce with a RAW or Jpeg or both? both
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
  • If the issue is with the output image, attach an XMP file if (you'll have to change the extension to .txt)
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes
  • Do you use lua scripts? no
  • What lua scripts start automatically?
  • What lua scripts were running when the bug occurred?

I think it's made on purpose

If by "Ctrl+mouse" you mean Ctrl+mouse wheel, it works here. Shift+scroll too. The difference is you have to capture the scrolling focus by clicking the slider first (or its label).

I confirm that it works. I missed the change from Ctrl + mouse click and movement to Ctrl + mouse wheel. Thanks.

Description of the bug

Releases equal to or newer than 702c4c5 make the Style panel disappear, and there's apparently no way to get the styles back. I "solved" it by rolling back to release Ansel f7669af, which is the last release unaffected by another bug I'll report (Ctrl + mouse and Shift + mouse not working). I also had to remove $HOME/.config/ansel. Very bad bug. Good thing I had a backup of my styles.

To Reproduce

Start Ansel 702c4c5 or newer; the Style panel is gone.

Expected behavior

The Style panel should not vanish.

Context

Screenshots

Screencast

Which commit introduced the error

Apparently, Ansel 702c4c5

System

  • darktable version : 702c4c5
  • OS : Linux 5.15.0-89-generic
  • Linux - Distro : Linux Mint 21.2 (Victoria)
  • Memory : 8GB
  • Graphics card : NVIDIA GeForce 920M
  • Graphics driver : nvidia-driver-470
  • OpenCL installed : unknown
  • OpenCL activated : unknown
  • Xorg : 1:7.7+23ubuntu2
  • Desktop : Mate
  • GTK+ : unknown
  • gcc : unknown
  • cflags : unknown
  • CMAKE_BUILD_TYPE : unknown

Ansel 702c4c5 - BUG; Ansel f7669af - OK

Additional context

  • Can you reproduce with another darktable version(s)? yes with version x-y-z / no
  • Can you reproduce with a RAW or Jpeg or both? RAW-file-format/Jpeg/both
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes/no
  • If the issue is with the output image, attach an XMP file if (you'll have to change the extension to .txt)
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes/no
  • Do you use lua scripts?
  • What lua scripts start automatically?
  • What lua scripts were running when the bug occurred?

Styles are temporarily unavailable until the history read-write is made thread-safe. They will re-appear in the global menu.

Fine, thanks.

Part of #262. Closing here.

Description of the bug

In the 'Open from disk...' modal, the English UI shows three check-boxes without labels; with the Italian translation, some internal strings appear.

To Reproduce

  1. Click on 'Open from disk...'

Expected behavior

Idk

Context

Screenshots English: immagine

Italian: immagine

System

  • darktable version : 0.0.0+627~g702c4c5f0
  • OS : Linux - kernel 6.2.0-39
  • Linux - Distro : Ubuntu 22.04

I think you are using an old commit 🤔

I think so too. Try using the latest app image

No, I compiled with the latest commit on master (I did it again just now, after cleaning the build directory, to be sure): 0.0.0+627~g702c4c5f0

I confirm that in the AppImage the UI is different and the import from File-> is working. I'll try to clone it again... But it's strange.

Ok, my fault; I don't know why it wasn't updated after the pull and the clean of the build directory. Next time I will check it once more than twice.

Particularly if you select orientation "Landscape", and your picture is in portrait orientation. Likely the opposite is true.

It also happens if you select a square aspect ratio and increase the border. At some point it becomes a rectangle

Fixed aspect ratio is intended to be used with mouse bounding box selection over preview.

The sliders that allow setting the bounding box from the margins were added some 9 years later. They were meant for printing cases where precise margins are expected.

I'm not sure what the proper behaviour should be if margins settings conflict with aspect ratio. Actually, I'm not even sure it makes sense to define a crop from margins if a fixed ratio is selected.

MD5: add more exhaustive test coverage

This is only meant to ensure that the various lengths are tested.

https://godbolt.org/z/jd53aosK3

I've been using the Ansel Appimage for a while with Ubuntu 20.04. Currently appimage Ansel-06d9cda-x86_64 from about a month ago. This is still working. Today I downloaded the latest, Ansel-3cf785d-x86_64, but it won't run. I've set it to allow execution in Permissions. When I double-click it, nothing happens. I downloaded an older one, several days old, that was the same. If I re-download my current one, 06d9cda, that works. Any ideas please?!

I just hit the same issue on Ubuntu 20.04. It appears that Ansel-3cf785d-x86_64 relies on a minimum glibc version of 2.32, whereas Ubuntu 20.04 ships 2.31. Without tracking down the reason for the change, the options I see are:

  1. Use an older build
  2. Build the latest locally
  3. Upgrade to Ubuntu 22.04, which uses a newer glibc (I intend to do this soon anyway)

My hands are tied here. Rawspeed needs at least GCC 12, which is not available on Ubuntu 20.04.

I understand these things to only a limited degree but I thought the whole point of appimages, flatpaks, was to avoid version clashes and build problems. I.e. appimage has GCC12 within it.

I'm not an expert and have not used appimages very much but it seems like, according to this page, appimages need to be built targeting the oldest base system you intend to run your software on.

I believe a Flatpak would work as you expect it would due to the sandboxing and runtime isolation (i.e. an old distro should be able to run a Flatpak running on a more recent runtime).

add_history_item: alloc and copy blendop_params even for modules that don't support blending.

I'm too lazy to track down all the places in the app where if(module->flags() & IOP_FLAGS_SUPPORTS_BLENDING) should be added before messing with blendop.

Description of the bug

Just like the title.

To Reproduce

  1. Go to darkroom.
  2. Click on crop module.
  3. Click "Edit".
  4. Click the arrow button next to ratio to change orientation.
  5. Click "Apply"

Expected behavior

Photo is cropped.

Context

Screenshots

Screencast

This is the CLI output; it looks like some thread-lock error:

ansel-crop-crash.txt

The program says it writes a backtrace to /tmp, but the file is empty, so I assume that if I start it from the CLI it just outputs to the terminal.

Which commit introduced the error

I am using .

The last commit I tried without this issue (but the crop orientation still doesn't work correctly) is .

System

  • darktable version : 0.0.0.r640.g0a2bf45ef
  • OS : Linux 6.6.6
  • Linux - Distro : Arch Linux
  • Memory : 32G
  • Graphics card : RTX2070S
  • Graphics driver : nvidia
  • OpenCL installed : Y
  • OpenCL activated : Y
  • Xorg : Wayland
  • Desktop : GNOME
  • GTK+ : 3.24.38
  • gcc : 13.2.1
  • cflags : -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection
  • CMAKE_BUILD_TYPE : Release

Additional context

  • Can you reproduce with another darktable version(s)? yes with version x-y-z / no
  • Can you reproduce with a RAW or Jpeg or both? RAW-file-format/Jpeg/both
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes/no
  • If the issue is with the output image, attach an XMP file if (you'll have to change the extension to .txt)
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes/no
  • Do you use lua scripts?
  • What lua scripts start automatically?
  • What lua scripts were running when the bug occurred?

ansel-crop-crash-debug.txt

I got GDB output from a debug build. Not sure why the backtrace is not written to the backtrace file in /tmp for me, but that's not important.

Fixed by 53c609c

Thanks!

Temperature.c : remove SSE2 path entirely

There is a bug in the only codepath that doesn't actually fall back to plain C code. Don't bother.

Develop: remove focus_hash

It's not a hash and GUI stuff has no business in history <-> pipeline interactions.

Dev: refactor add_history_item

Factorize code, remove GUI interaction.

WARNING: enabling disabled modules in GUI and invalidating preview pipes if any are now the responsibility of the calling functions. Keep GUI stuff in GUI code.

Detach a re-usable dt_iop_module_hash function

Will be used for history too.

Module hash : rework the logic to avoid memcpy

Work in-place.

Develop: refactor add_default_modules

The loops can be merged.

Pipeline: deal with mipmap cache at the pipeline level

Less finicky to work with, albeit less efficient

Fixed Bash script tools

Changed darktable's config & exec into Ansel

CMake: Fix build check on AVIF exporter

If you read too much CMake, you need to have a break, otherwise you are very likely to make mistakes. Use the correct variable to fix it.

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage No data about Duplication

See analysis details on SonarCloud

Since you are working on the crop module, I went over the aspect ratios, organized them, added a few more, fixed some errors and capitalization, and added some comments. Maybe not all are necessary, but the dimensions are not made up. The new additions come from the sizes Ilford makes their sheet film in. Also fixed the cinema aspect ratios.

  • Freehand
  • Original image
  • Square (1x1, 6x6, 8x8, 14x14) = 1
  • 6x7 = 1.167
  • 5x6 (10x8 in print, 10x12, 20x24) = 1.2 (8x10 is wrong in the Ansel preset: its ratio is 1.2, not 1.25. Also, I have no idea what "in print" means)
  • 14x17 = 1.214
  • 4x5 (12x15, 16x20) = 1.25
  • 11x14 = 1.273
  • 8.5x11 (US letter) = 1.294 (Unsure of the need for this one. Very close to the next one, and not a film aspect ratio)
  • 3.25x4.25 (6.5x8.5) = 1.307
  • 4:3 (18x24, VGA, TV) = 1.333 (Maybe remove VGA and TV?)
  • 4.75x6.5 = 1.368
  • Academy ratio = 1.37
  • 6.5x9 (13x18) = 1.385
  • 5x7 = 1.4
  • A4 (ISO 216, DIN 476) = 1.41 (maybe remove "ISO 216, DIN 476", since it's the same thing) (Unsure of the need for this one. Very close to the previous and the next one, and not a film aspect ratio)
  • 14x20 = 1.428
  • 2.25x3.25 = 1.445
  • 3:2 (35mm film, 4x6, 6x9) = 1.5
  • 7x11 = 1.571
  • 5x8 (10x16) = 1.6
  • Golden ratio = 1.618 (changed it from "golden cut". Unsure if necessary, since it's really close to the previous one, and it's not a film aspect ratio, but alright)
  • 3x5 (12x20) = 1.667
  • 8.5x15 = 1.765
  • 16:9 (HDTV) = 1.778
  • Normal widescreen = 1.85
  • 2:1 (8x16, univisium) = 2 (I would take out univisium. It has not been adopted, so nobody outside movie nerds will know about it)
  • 9x21 = 2.334 (wrong in Ansel preset)
  • Cinemascope 3 = 2.35 (Cinemascope 1-3 numbered in chronological order)
  • Anamorphic widescreen = 2.39
  • 5x12 = 2.4
  • 7x17 = 2.428
  • 2x5 (4x10, 8x20) = 2.5
  • Cinemascope 2 = 2.55
  • Cinemascope 1 = 2.66
  • 24x65 (Xpan) = 2.708
  • Ultra-Panavision (70mm, IMAX) = 2.76
  • Panorama = 3
  • Polyvision = 4
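
The decimal values are easy to double-check; for example (a quick sketch, with names taken from the list above):

```python
# Recompute a few of the aspect ratios listed above and print them
# rounded to three decimal places, matching the list's notation.
presets = {
    "5x6": (6, 5),             # listed as 1.2
    "14x17": (17, 14),         # listed as 1.214
    "4x5": (5, 4),             # listed as 1.25
    "24x65 (Xpan)": (65, 24),  # listed as 2.708
}
for name, (w, h) in presets.items():
    print(f"{name} = {round(w / h, 3)}")
```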

This is getting a bit out of hand though, I'm not sure the combobox would fit in height for smaller screens.

During work on the crop module, I discovered there is a hidden preference in .config/ansel/anselrc called plugins/darkroom/clipping/extra_aspect_ratios. You can add as many as you want, separated by /, with ratios defined as width:height.

Thanks for the tip! I will use it.
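
For illustration, a line in anselrc following the syntax described above could look like this (the ratio values here are made-up examples; only the preference key comes from the comment):

```ini
plugins/darkroom/clipping/extra_aspect_ratios=65:24/17:14/3:1
```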

The list is clearly too long. I just wanted to fix mistakes in the current one, and perhaps add a few new relevant ones. This is about having sane (and accurate) presets. Perhaps you disagree, but I would take out US Letter, A4, golden ratio, and all the cine aspect ratios. The cine ones are not printed in that aspect ratio, but projected through anamorphic lenses. That leaves us with 6 more crop presets.

This is a photo editing app, after all. Having presets based on actual film dimensions I think is the right choice:

  • Freehand
  • Original image
  • Square (1x1, 6x6, 8x8, 14x14) = 1
  • 6x7 = 1.167
  • 5x6 (8x10,10x12, 20x24) = 1.2
  • 14x17 = 1.214
  • 4x5 (12x15, 16x20) = 1.25
  • 11x14 = 1.273
  • 3.25x4.25 (6.5x8.5) = 1.307
  • 4:3 (18x24) = 1.333
  • 4.75x6.5 = 1.368
  • 6.5x9 (13x18) = 1.385
  • 5x7 = 1.4
  • 14x20 = 1.428
  • 2.25x3.25 = 1.445
  • 3:2 (35mm film, 4x6, 6x9) = 1.5
  • 7x11 = 1.571
  • 5x8 (10x16) = 1.6
  • 3x5 (12x20) = 1.667
  • 8.5x15 = 1.765
  • 2:1 (8x16) = 2
  • 9x21 = 2.334 (wrong in Ansel preset)
  • 5x12 = 2.4
  • 7x17 = 2.428
  • 2x5 (4x10, 8x20) = 2.5
  • 24x65 (Xpan) = 2.708

Would it not be a good idea to be able to input your own custom ratio?

Would it not be a good idea to be able to input your own custom ratio?

It has been possible to do so since forever.

Deprecated warnings in tests still exist

Bumped C++ to 14. Added new compile flags for Linux: -Wall -pedantic -O2 -Wextra -fexceptions

Removed unused parameters; not a good solution, but I'm trying to add the -Werror build flag. Initialized all parameters (in tests).

Remove deprecated function calls. Bump VERSION_MICRO to 99

I don't think we need the deprecated function calls any more. I think we should be able to remove them from the master branch. The projects that depend on lensfun should have moved to the newer API calls instead (because the function calls have been deprecated for a while now :) ).

Remove compile warnings

Deprecated warnings in tests still exist

Changed darktable's config/bin/ref into Ansel

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage No data about Duplication

See analysis details on SonarCloud

Thanks

The first 5 bytes of the CanonLe block give the serial number when converted to hexadecimal. This PR also fixes bug #2138, as it stops the 30-byte length of the block from being truncated to a multiple of 4 bytes.
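
As a sketch of that decoding (the function name and sample bytes are made up; only the "first 5 bytes, rendered as hexadecimal" rule comes from the PR description):

```python
def canon_lens_serial(block: bytes) -> str:
    # Take the first 5 bytes of the CanonLe block and render each one
    # as two hex digits, giving a 10-character serial number string.
    return "".join(f"{b:02x}" for b in block[:5])

# Hypothetical 30-byte block starting with the serial-number bytes.
sample = bytes([0x00, 0x01, 0x23, 0x45, 0x67]) + bytes(25)
print(canon_lens_serial(sample))  # -> 0001234567
```

As the discussion below notes, some files carry a 28-byte block whose first byte is the value 28, so a robust decoder would need to handle more than this happy path.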

@norbertwg Can you please take a look as well as you've recently worked on related stuff?

Looking at the results for all of exiv2's reference files that have Exif.CanonLe.LensSerialNumber, I see that most have 30 bytes in the CanonLe block, but some have 28 bytes, and the first byte is not part of a serial number but the value 28. Decoding the serial number may not be as simple as I have assumed.

Edit: writing one of my Canon files with libexiv2 v0.28.1 truncates the block from 30 bytes to 28 and overwrites the first four bytes with 28 0 0 0 so these may be files changed by exiv2.

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (130064d) 63.88% compared to head (03b49ca) 63.89%.

Files Patch % Lines
src/canonmn_int.cpp 83.33% 1 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2860 +/- ##
==========================================
+ Coverage 63.88% 63.89% +0.01% 
==========================================
 Files 103 103 
 Lines 22369 22381 +12 
 Branches 10865 10872 +7 
==========================================
+ Hits 14291 14301 +10 
- Misses 5856 5857 +1 
- Partials 2222 2223 +1 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

I was working on Exif.Canon.SerialNumber (which is the body's serial number) and used Canon Digital Photo Professional 4 as a reference. The Canon program does not show the lens serial number, so it cannot be used as a reference here. A finding of my investigation was that the serial number of the EOS D30 is printed as a combination of hex and decimal; for all other bodies I could check, it is printed only as decimal. It might be the case that Canon once decided to change the format for the lens serial number as well.

@jim-easterbrook Do you have some documentation about the format of Canon's serial numbers, for the lens as well as the body?

Sorry, no documentation at all. Just a chance discovery when looking at the CanonLe data in hexadecimal and comparing it with Exif.Photo.LensSerialNumber. Very few of the images in Exiv2's test suite have anything else in the CanonLe block.

I found several images where Exif.Canon.SerialNumber was different from Exif.Photo.BodySerialNumber, so we might have a similar problem with the lens serial number. I have collected some other images and will have a look at them.

Everything seems to have two serial numbers anyway... My EOS 100D has the same number in Exif.Photo.BodySerialNumber as printed on the bottom of the camera (12 decimal digits) but reports a different, hexadecimal, number via gphoto2. (No, it doesn't decode to the same decimal number.) I don't see Exif.Canon.SerialNumber in its exiv2 tags.

I checked Exif.CanonLe.LensSerialNumber and Exif.Photo.LensSerialNumber in my set of test files. A few files had the CanonLe-tag filled where the Photo-tag was empty. Some files had the CanonLe-tag filled where the Photo-tag contained a zero. In all other cases the values matched. So this suggested change makes sense.

Several Canon bodies seem not to fill Exif.Canon.SerialNumber; at least the test images I had from them did not contain an entry (which theoretically could have been deleted before uploading them for common access).

There is another tag for serial number: Canon.InternalSerialNumber. Perhaps this is the one gphoto2 shows. In my samples its value is completely different from the values in Exif.Canon.SerialNumber or Exif.Photo.BodySerialNumber.

My EOS 100D has Canon.InternalSerialNumber - its value is HA0577275. The gphoto2 camera serial number is 3257e3b, which appears to be completely unrelated.

Please let me know if there is anything else I need to do to advance this PR.

From my point of view there is nothing left to advance.

Should this be backported to 0.28.x?

I would say yes. I made the same changes to the source files on the 0.28.x branch (on my local machine) and it all seems to work.

@Mergifyio backport 0.28.x

Well, that didn't work... 🤷

Orientation Icon function not responding

Windows11

702c4c5

or is there a very specific order of steps to make it work? It seems to function once in a while

imageio: Add WebP importer based on GraphicsMagick

Mostly ported from darktable-org/darktable. GraphicsMagick could also import WebP and it's easy to implement.

imageio: Add WebP importer based on ImageMagick

Mostly ported from darktable-org/darktable. ImageMagick could also import WebP and it's easy to implement. This also allows ImageMagick to read image profile.

imageio: Increase compatibility of exported sRGB AVIF images

Mostly ported from darktable-org/darktable. Refer to ; BT601 and BT401BG are identical here for libavif.

imageio: Accept more MC for CICP encoded P3 spaces

There is no direct mapping for P3 with MC, so none of those values are wrong, see .

imageio: Add fix for mistagged legacy AVIF images in HEIF importer

libheif can also handle AVIF; this adds the same fix we have in the AVIF importer.

imageio: Move image flags settings to each importer file

This makes it easier to call importers, and removes a lot of redundant code that is already done in the importers.

imageio: Fix HDR/LDR detection in HEIF/AVIF importers

Those files support both HDR and LDR; while we typically treat them as HDR, we need to set them to LDR when detected.

imageio: Stop correcting sRGB AVIF images with MC=BT709

It seems this is not a wrong tag.

imageio: Use original bit depth to detect HDR/LDR in HEIF importer

Mostly ported from darktable-org/darktable, the original PR has full explanation about why we need this, see .

colorspaces: Add Display P3 support

As well as AVIF exporter.

imageio: Try to read AVIF/HEIF ICC profile on import

Mostly ported from darktable-org/darktable.

imageio: Add WebP importer based on libwebp

Mostly ported from darktable-org/darktable. Because we already export webp with libwebp, this won't add new dependency. Also find libwebpmux in CMake module.

debug: Add DT_DEBUG_TRACE_WRAPPER()

With some black magic/macros we could trace a function without modifying all callers. But what must we give it in return?

This will only show when you run it in verbose mode (for example, ansel -d verbose -d dev), so you can skip them when you only want to check normal debug outputs.
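
The idea can be illustrated in Python, where a decorator plays the role of the wrapper macro (this is a sketch of the concept, not Ansel's C implementation; names are invented):

```python
import functools
import sys

def trace_wrapper(func, verbose=True):
    # Wrap `func` so each call logs entry and exit; the wrapped function's
    # return value is forwarded unchanged, which is the tricky part a
    # C wrapper macro has to solve too.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if verbose:
            print(f"[trace] enter {func.__name__}", file=sys.stderr)
        result = func(*args, **kwargs)
        if verbose:
            print(f"[trace] leave {func.__name__}", file=sys.stderr)
        return result
    return wrapper

def add(a, b):
    return a + b

add = trace_wrapper(add)  # no caller of add() needs editing
print(add(2, 3))  # -> 5
```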

develop: Use DT_DEBUG_TRACE_WRAPPER()

This rewrites different debug helpers added in 786d31c191, 2ed0c356fc and 92b3b71b30 with DT_DEBUG_TRACE_WRAPPER() so we don't need to pass debug variables explicitly via arguments.

The final part of https://github.com/lensfun/lensfun/issues/2131 (fixes #2131)

Only merge after https://github.com/lensfun/lensfun.github.io/pull/9!

Replaces the Sourceforge source with "lensfun.github.io" (https://github.com/lensfun/lensfun/issues/2131#issuecomment-1848921025)

Preparation for https://github.com/lensfun/lensfun/issues/2131.

tools/update_database/follow_db_changes.py currently generates the files for updates and a few other things. For hosting the files on lensfun.github.io we need a tool that just generates the files. Therefore most of the generation and conversion of the files is moved to tools/update_database/xml_converter.py and used in follow_db_changes.py and the new tools/update_database/generate_db.py.

tools/lensfun_convert_db_v2_to_v1.py, used for making 0.3.x releases, already did most of what generate_db.py does, but not everything needed. Now that we have generate_db.py, it can be removed.

Edit: GitHub thinks lensfun_convert_db_v2_to_v1.py was moved to xml_converter.py, actually the code is copied from follow_db_changes.py.

Provide a single way to trace a function without modifying all callers.

Quality Gate passed

The SonarCloud Quality Gate passed, but some issues were introduced.

1 New issue 0 Security Hotspots No data about Coverage 0.0% Duplication on New Code

See analysis details on SonarCloud

Thanks !

Unable to clear history #269

Open

kred opened this issue Dec 13, 2023 · 5 comments

Labels

priority: critical

Affects basic and core functionalities of the software in a way that prevents it from working at all

regression

Comments

kred commented Dec 13, 2023

Description of the bug

I'm unable to clear history of modifications in Darkroom

To Reproduce

Select image that has some edits and switch to Darkroom

Click on 'delete image's history' button

Click 'yes' in the confirmation dialog

There is no change in history stack and image

Expected behavior

Image history is restored to default.

Context

Screenshots

Screencast

Which commit introduced the error

System

darktable version : 2149027

OS : Win11

Linux - Distro : e.g. Ubuntu 18.4

Memory : 32GB

Graphics card : Intel A770 16GB

Graphics driver :

OpenCL installed :

OpenCL activated :

Xorg :

Desktop :

GTK+ :

gcc :

cflags :

CMAKE_BUILD_TYPE :

Additional context

Can you reproduce with another darktable version(s)? yes with version x-y-z / no

Can you reproduce with a RAW or Jpeg or both? RAW-file-format/Jpeg/both

Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes/no

If the issue is with the output image, attach an XMP file if (you'll have to change the extension to )

Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes/no

Do you use lua scripts?

What lua scripts start automatically?

What lua scripts were running when the bug occurred?

kred (Author) commented Dec 13, 2023

It is a recent regression, because it worked in 7472fae.

pedrorrodriguez added the regression and priority: critical labels Dec 13, 2023

aurelienpierre (Collaborator) commented Dec 13, 2023

That's actually a good thing, because this button is meant to be used in the lighttable with a cold history (as opposed to the darkroom, where the history is "hot" because it is loaded in the pipeline).

kred (Author) commented Dec 14, 2023

So there is another bug then: the 'hot' history stack is cleared, but the preview in the darkroom is not refreshed.

Now the only way to reset all settings in Darkroom is clicking Edit->Delete development only, right?

pedrorrodriguez (Collaborator) commented Dec 14, 2023

Wontfix? Do we need to delete an image's development from the lighttable now?

Jiyone (Collaborator) commented Dec 20, 2023

I wonder if this BR should be closed now? 🤔 I can reset history on my side.


image info: Support backslash escaped characters #268

Closed

AlynxZhou wants to merge 1 commit into aurelienpierreeng:master from AlynxZhou:escaped-info

Conversation

AlynxZhou commented Dec 13, 2023

This allows user to use escaped characters in image info, for example, use to break lines. Note that already uses backslash to escape characters, so in order to break lines, you need to use in preferences.

sonarcloud bot commented Dec 13, 2023

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage 0.0% Duplication on New Code

See analysis details on SonarCloud

AlynxZhou (Author) commented Dec 13, 2023

Closed because @Jiyone told me we already have support for newlines in variable expansion, and it is documented.

AlynxZhou closed this Dec 13, 2023

Jiyone (Collaborator) commented Dec 14, 2023

😎

Darkroom: implement history auto-save on DB and XMP (if enabled)

Save every 30 s if the history changed and we have fewer than 200 mask states (i.e. the history is small enough to be saved in the foreground).
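
The save policy can be sketched as follows (class and attribute names are invented; only the 30 s interval and the 200-mask threshold come from the commit message):

```python
import time

AUTOSAVE_INTERVAL = 30.0  # seconds between auto-saves
MAX_MASK_STATES = 200     # above this, foreground saving is too slow

class HistoryAutosaver:
    def __init__(self):
        self.last_save = 0.0
        self.dirty = False  # set True whenever the history changes

    def maybe_save(self, num_mask_states, now=None):
        # Return True when a foreground save to DB and XMP should happen.
        now = time.monotonic() if now is None else now
        if not self.dirty:
            return False  # nothing changed since the last save
        if now - self.last_save < AUTOSAVE_INTERVAL:
            return False  # saved too recently
        if num_mask_states >= MAX_MASK_STATES:
            return False  # history too large to save in the foreground
        self.last_save = now
        self.dirty = False
        return True
```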

Fix undo/redo in darkroom

Send HISTORY_UPDATE signal to history module without breaking undo/redo

gui_update : history mutex should now be safe

Found the code responsible for the deadlock in 3a17fc3c5a39f4c2b48a4594947c5002884584a6

Lua: change target scripts repo to our own

Needs small changes to make it work with Ansel.

Debug strings: fix long long int format on GCC

Clang and GCC disagree on how to treat uint64_t

CMake: Drop run-from-build-dir

In order to support run-from-build-dir, what it basically did was "manually do at build time what would be done during installation": a mixture of "change the binary_dir of subdirectories to pretend it has the same structure as the install dir" and "manually copy files to the build dir to pretend it has the same structure as the install dir". This changes the structure of the build dir, adds a lot of variables to the CMake files, and often uses variables incorrectly.

Because this makes the CMake files hard to read (the variables are misleading), and running desktop apps directly from the build dir is not a recommended workflow for users, this commit drops it.

Users should always install the app before running it; it is easy to install to a location where a normal user has write permission by changing CMAKE_INSTALL_PREFIX.

Loading resources and plugins relative to the binary is kept, so the app is still relocatable.
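
For illustration (an assumed, generic CMake invocation, not taken from Ansel's build docs), installing to a user-writable prefix could look like:

```shell
# Configure, build, and install into ~/.local; no root privileges needed.
cmake -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$HOME/.local"
cmake --build build
cmake --install build
```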

desktop entry: Only use absolute path when not installing to /usr

If we are installing to /usr, use the generic names for the binary and icon, so the system can find the correct files for us; otherwise use absolute paths to the binary and icon.

CMake: Clear RPATH mess

Use PLUGINDIR for $LIBDIR/ansel, because this is the name used in the function names, and then we can use LIBDIR for the system lib dir.

Install libansel.so to LIBDIR; this allows us to disable RPATH entirely with CMAKE_SKIP_RPATH when it is installed as system software, because we only have other libraries linked to it.

Correct the RPATH for all binaries and libraries, because $ORIGIN is the directory of the current binary or library.

Rename plugindir to moduledir

To match the command line argument.

We still mix module/plugin a lot; to keep compatibility we'd better not change them (for example, plugin_name is part of the module API in Lua), so this commit only contains changes to the module dir.

desktop entry: Rename ansel.desktop.in to photos.ansel.app.desktop.in

Make it the same name of the output one.

Pixelpipe: individual hash: take pipeline params into account.

We need both the user-defined params (module->params) and the pipeline params (piece->data), because some pipeline params may be defined or sanitized at runtime (color profiles), but some pipeline params are allocated on the stack (LUTs) from user params (graph nodes), meaning they are never written to the piece->data struct.

Using both module->params and piece->data is the only way to accurately represent the internal state of a module, provided process() methods don't do anything dynamic at runtime.
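
The principle can be sketched like this (the hash function and parameter names are illustrative, not Ansel's actual C implementation):

```python
import hashlib

def module_state_hash(user_params: bytes, pipeline_params: bytes) -> str:
    # Hash both buffers together: two states with identical user params
    # but different runtime pipeline params must hash differently.
    h = hashlib.sha1()
    h.update(user_params)
    h.update(pipeline_params)
    return h.hexdigest()

# Same user params, different runtime state -> different hashes.
a = module_state_hash(b"exposure=+1.0", b"profile=sRGB")
b = module_state_hash(b"exposure=+1.0", b"profile=AdobeRGB")
print(a != b)  # -> True
```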

Exposure: do not add history step in gui_update

Unneeded, and it breaks the history mutex.

Mask manager: ensure the dummy module is disabled

Creating a pipeline node for the sole purpose of mapping mask entries in history was a really shitty idea. Letting it stay enabled to perform a useless pixel copy and capture a cache line deserves an Ig Nobel prize.

I'll take those 50 ms, you morons.

Crop: make user interaction uniform with the rest of the app

  • focusing the module does not start editing mode
  • editing mode is started only by clicking on the button
  • validation is done explicitly too, and exits the edit mode
  • intermediate framings are not committed to the pipeline and don't trigger recomputes
  • only the final history commit triggers the pipeline recompute.

Import.c: declare function ahead

To please static analysis.

Kudos, SonarCloud Quality Gate passed!    Quality Gate passed

Bug A 0 Bugs Vulnerability A 0 Vulnerabilities Security Hotspot A 0 Security Hotspots Code Smell A 0 Code Smells

No Coverage information. No Duplication information.

Thanks !

Import: make the recursive file detection able to sustain huge loads

  • Move the recursive file crawler to a non-GUI thread,
  • Have the file counter updated asynchronously through signals, every second,
  • Optimize the file recursion to avoid traversing the list twice.

Tested on a directory containing up to 56,000 files; everything is traversed in a matter of seconds.

Scripts were looking for darktable binaries and dirs; changed them for Ansel.

Please create a portable version of Ansel. The program parameters should be written in an .ini file placed next to the main program. Thanks.

I don't understand the problem to solve nor the scope of the solution expected. I also don't have time to package all possible flavours of application bundles around.

Merge remote-tracking branch 'upstream/pr/583' into develop

  • upstream/pr/583: Canon PowerShot SX220 HS (CHDK) support

Revert "Cr2Decoder: read black/white levels from ColorData4 ver.3"

After further analysis, it does not look like either one of these is SpecularWhiteLevel for that camera. Probably it simply isn't there.

https://discuss.pixls.us/t/any-canon-eos-40d-users-around-a-bit-of-testing-wanted/40843 https://github.com/exiftool/exiftool/issues/233

This reverts commit bd9074ac1181af300658d3dae4baf286e31d8bed. This reverts commit ccb06f6c67c51228544290459e00bb2b71e59804.

cmake: restore an "exiv2lib" target

Commit a8c3455e5cd7ee65acc5f398581e1386f7df5108 and commit eb05551ed2d21079299f2f4da2f463df6857b884 changed the target of the exiv2 library ("exiv2lib"), exporting it in the "Exiv2" namespace and making it usable as "Exiv2::exiv2lib" instead. An ALIAS to "exiv2lib" was added; however, cmake does not install or export ALIAS targets [1].

Hence, restore compatibility with the existing cmake users of exiv2: manually create an ALIAS target in the cmake config files after all the targets are loaded and checked.

[1] https://cmake.org/cmake/help/latest/command/add_library.html

DB thread lock: remove it.

Not used anymore. The image cache has its own thread lock, this should be reused if needed for DB interactions.

When you have a snapshot enabled and select a colour picker or similar, you cannot click on the image to select a colour or draw a box, etc., as the snapshot will be selected instead. This requires you to deselect the snapshot each time you want to make the change.

I believe it would be better if, when a colour picker or similar is selected, the snapshot function were not given priority until that picker is no longer active. This would make changing and then comparing with a snapshot more efficient.

yep

Description of the bug

Hi! Recent changes (build 55d8a7a) broke applying styles in Darkroom mode; in Lighttable it works OK. Previous builds (from 3-4 days ago) were fine.

To Reproduce

  1. Create style
  2. Switch to Darkroom
  3. Apply style
  4. Application is not responsive
  5. After restart, style is applied to photo, editing works fine

Expected behavior

Style is correctly applied to photo in Darkroom.

Context

Screenshots My style: obraz

Screencast

Which commit introduced the error

System

  • darktable version : 55d8a7a
  • OS : Win11
  • Memory : 32GB
  • Graphics card : Intel A770 16GB
  • Graphics driver :
  • OpenCL installed :
  • OpenCL activated :
  • Xorg :
  • Desktop :
  • GTK+ :
  • gcc :
  • cflags :
  • CMAKE_BUILD_TYPE :

Additional context

  • Can you reproduce with another darktable version(s)? yes with version x-y-z / no
  • Can you reproduce with a RAW or Jpeg or both? RAW-file-format/Jpeg/both
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes/no
  • If the issue is with the output image, attach an XMP file (you'll have to change the extension to .txt)
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes/no
  • Do you use lua scripts?
  • What lua scripts start automatically?
  • What lua scripts were running when the bug occurred?

BW Across 100.zip

Duplicate of #262

Closing as duplicate. Keep following the original issue.

Currently, lensfun-update-data downloads calibration data from https://lensfun.sourceforge.net/db/ and https://wilson.bronger.org/lensfun-db/. I'm not sure how those are hosted and updated, but I believe both are reliant on @bronger.

As the updates are just files hosted on a web server it would be possible to use https://lensfun.github.io for hosting. The site already updates with every new lens (make-lenslist workflow), so just building the update files and putting them in a folder would be easy.

GitHub pages have "a soft bandwidth limit of 100 GB per month" equivalent to more than 250,000 updates (minus the normal traffic). I don't have any usage data but it feels like this should be sufficient.

I have already played a bit with this, and if there are no objections to the idea, I could open a PR.

Edit: I have created three pull requests for doing this.

  1. https://github.com/lensfun/lensfun/pull/2132 Preparation: Python script for creating the update files
  2. https://github.com/lensfun/lensfun.github.io/pull/9 The workflow for generating the files and deploying the site
  3. https://github.com/lensfun/lensfun/pull/2133 updating the sources in lensfun-update-data

Good idea! So, a job on GitHub that is triggered after every commit would update the DB files?

Yes, so the make-lenslist workflow creates a commit in the lensfun.github.io repo which triggers the pages-build-deployment workflow. This workflow (default GitHub Pages behavior) can be replaced by a custom workflow that first builds the update files. The files would not be committed to the repo, so it doesn't grow in size.

I suggest replacing SourceForge with GitHub. I can continue to operate my own site as a backup, but I would not object to letting it fade out over the next two or three years.

Coding style: artificially put something in IFs to avoid empty statements

Doesn't change a thing though.

Import: show the number of selected files in a string

Apparently, users expect something to be imported even if they selected nothing.

Description of the bug

Ansel freezes by applying style from the darkroom.

To Reproduce

  1. Open picture in darkroom
  2. Apply style
  3. Ansel is frozen

Expected behavior

Style is applied and Ansel continues to work.

Screenshots

Bildschirmfoto 2023-12-08 um 22 26 02

Which commit introduced the error

A version compiled from commits of Wed. 6/12 (22:00) works.

System

  • Ansel version : 6083b54de
  • OS : OSx (13.6.1)
  • Memory : 16 GB
  • Graphics card : M2 Pro
  • Graphics driver :
  • OpenCL installed : yes
  • OpenCL activated : yes
  • Xorg :
  • Desktop :
  • GTK+ : 3.24.38_1
  • gcc : Apple clang version 15.0.0 (clang-1500.0.40.1)
  • cflags :
  • CMAKE_BUILD_TYPE : Release

Darkroom: clean up mouse events

Same as bauhaus, handle param commits and dispatch pipe recomputes only on button_released and mouse_moved during drag.

Bypass mask callbacks when masks are not shown.

Description of the bug

When I open the crop module and start dragging, it works as expected as long as the "working..." label is visible. When "working..." disappears, the crop area resets to the original area (before I started dragging) and doesn't respond to drag anymore. The sliders do change, but not the GUI. When I release the mouse and start a new dragging, the sliders reset to match the (incorrect) GUI area.

To Reproduce

As described in the bug description.

Expected behavior

Dragging should change crop area.

Which commit introduced the error

I believe this one introduced the error: 786d31c191de98d4e283a4a374be3e0a67934e9a, but that one does not compile; compilation is fixed in the next commit 2ed0c356fc0c049eef751225dcae7e9887e07957.

System

  • darktable version : commit f11f004f31e8010d7941881460319d37e3bfb698
  • OS : e.g. Linux - kernel 6.6.3
  • Linux - Distro : Linux Mint 21.1 Vera
  • Memory : 12 GB
  • OpenCL installed : no
  • OpenCL activated : no
  • CMAKE_BUILD_TYPE : RelWithDebInfo

Additional context

I started debugging this and narrowed it down to the _set_max_clip function in src/iop/crop.c.

While the "working..." label is visible, the function returns early because of if(self->dev->preview_status != DT_DEV_PIXELPIPE_VALID) return 1;

After that, the function runs to the end, and the clip area is overwritten here

g->clip_x = fmaxf(points[4] / self->dev->preview_pipe->backbuf_width, g->clip_max_x);
g->clip_y = fmaxf(points[5] / self->dev->preview_pipe->backbuf_height, g->clip_max_y);
g->clip_w = fminf((points[6] - points[4]) / self->dev->preview_pipe->backbuf_width, g->clip_max_w);
g->clip_h = fminf((points[7] - points[5]) / self->dev->preview_pipe->backbuf_height, g->clip_max_h);

The commit that introduced the bug removed another return point from the function: //if(g->clip_max_pipe_hash == self->dev->preview_pipe->backbuf_hash) return 1;

I haven't checked, but I believe it (accidentally?) prevented the overwriting of the clip area.

known, WIP.

It's no accident, the way pipeline recomputations are dispatched has been changed globally in the software and the crop and perspective modules are exceptions. I needed to fix the general case first.

Fixed by 7472faeef61fdb96ec982043ec468cc59179330e

Develop: implement zoom-only invalidation

In darkroom, when moving the main view, there is no need to resync the pipeline, just update the pipe ROI.

Pixelpipe: cache GPU pipe states in RAM between modules

It's a necessary condition to make the cache work consistently with mask previews without having to invalidate everything. The perf penalty of having to do GPU → RAM I/O is largely absorbed when using the most expensive modules and/or long pipelines.

Develop: work around another history_mutex deadlock

Some module probably calls a method that locks history_mutex internally in its gui_update() method. This workaround is mildly unsafe, but until we find the culprit…

Description of the bug

Trying to open files developed in an older version (up to d825e81-win64) with Negadoctor in the Darkroom atelier, Ansel gets stuck in a loop. I have to force-stop Ansel. This happens to all images developed with Negadoctor.

Win10, i7-2600, Geforce GTX 1050ti ansel-1dbee19-win64

Can you start the program in a terminal with -d all and give the output? I don't reproduce here.

I don't reproduce on ubuntu 22.04. Using the last commit 1dbee19. You should give us more information. What do you do exactly? Are you using the Appimage? Do you build Ansel? If yes, how?

  1. ansel-log.txt

  2. Start Ansel, which opens in the Lighttable atelier. Choose a folder from the Library panel, then double-click an image (.tif) in the center panel, from a collection that had been developed in an older version of Ansel with the Negadoctor module. (This happens only to pictures with the Negadoctor module in the pipeline.) Ansel gets stuck.

Running ansel-1dbee19-win64.exe on Windows 10, i7-2600, Geforce GTX 1050ti.

No, I do not build it myself.

Hope this helps.

I can reproduce on some DNGs, but Negadoctor is not involved.

No DNGs here, only TIFs from scanners. I gave some TIFs without Negadoctor a try: stuck in a loop. Tried importing some TIFs and JPGs, which results in a stuck loop as well.

Possibly fixed with f11f004.

It seems negadoctor has nothing to do with that, but non-raw images create a dead-lock.

Shall the title of this issue be changed?

Not until you confirm that the fix fixed it

As it is not related only to Negadoctor module, but for non-raw images handling?

yes

:+1:

Fixed ?

I found the exact issue.

Overall looking great.

A couple of ideas:

  • The on/off icons of the left panel modules should be aligned to the right

  • Since you removed the on/off icons from history entries, they should use all available horizontal space

Ansel-theme-suggestions3-github

  • Since the export dialog is now in its own popup window, there is no need for the arrow before "Export". It's not collapsible anymore, is it?

  • Text fields should use all available horizontal space (I'm talking in particular about "Taille" in this example), with some padding around them.

  • Dropdowns should look clickable. If my mockup is too visually distracting, they should look like buttons at least on hover, like buttons on module headers do throughout the UI.

  • Throughout the UI, buttons and dropdowns (clickable stuff) should be lighter, and text fields darker.

Ansel-theme-suggestions1-github

  • In-module buttons should behave the same as buttons in module headers, that is, they should look like buttons at least on hover.

  • Sliders should use all available horizontal space. If there is an icon next to a slider (eyedropper, show mask), that shouldn't affect the size of subsequent sliders.

  • Icons next to sliders should be aligned with the slider itself. Atm they look too high.

  • Slider values should also be in line with the slider itself. Ideally the slider name would also be in line, but given that Ansel has some very long ones that vary in length depending on the language, I left the slider name above. Other software, like Capture One, has a text field where the slider value is, so that it can easily be changed. I didn't add this since Ansel has the right-click option. It could be added for easier discoverability, if needed.

Ansel-theme-suggestions2-github

  • Text and Combobox alignment. This is one of the strangest-looking ones in Ansel/darktable. Generally the text is aligned left, and the comboboxes aligned right. This uses all available horizontal space, but is terrible for legibility and generally looks odd. Two options come to mind:
  1. Align everything left. To me it looks a bit messy and less pleasing, but it's easier to read:

Ansel-theme-suggestions1b-github

  2. Choose some vertical axis around which to organise content. You can even center-align, and it would look fine and be easier to read. Lightroom and Capture One choose this approach.

Ansel-theme-suggestions1c-github

I like most of your idea, except the last one.

Text and Combobox alignment. This is one of the strangest-looking ones in Ansel/darktable. Generally the text is aligned left, and the comboboxes aligned right. This uses all available horizontal space, but is terrible for legibility and generally looks odd.

Most software are designed to align to both side (that is, text on one side, entries or dropdowns on the other side), I think this is generally accepted by most people, for example:

image

image

I am afraid that's your personal opinion, because I don't think they are odd or hard to read.

Two options come to mind:

Your two prototypes are actually harder to read for me 😸️. The mix of long and short lines is too noisy for me.

Thanks for your answer. Your two examples have one thing going for them that Ansel doesn't: borders. Also, Krita or Gnome settings are not necessarily the pinnacle of design. My issue is that short text on the extreme ends of a popup window or module makes it hard to read, particularly when it's a long list, like in the export module.

Capture One aligns text to the right so that it is right next to the combobox. It also aligns comboboxes and sliders vertically, so that it's easier to see. The problem is Ansel has very long text labels, whereas Capture One usually uses one word per slider/combobox/text field.

imaxe

Lightroom also aligns text to the right so that it is right next to the sliders

imaxe

So does Affinity photo:

imaxe

I think what makes them easier to read for me than your last image is that they are not aligned to an axis in the center, but aligned to the right. Their entries/sliders/dropdowns take most of the space, so it is not so noisy for the eyes. Aligning to the right is not bad, but aligning to the center is not good, haha.

The text is aligned to the right. That, I think, is good. And text in comboboxes is aligned left; I also think that's good. That's like an imaginary axis between text and comboboxes. If I put my axis in the middle for the mockup, it is because text labels in Ansel are much longer, so they need more space. I could add a second line, but I'm unsure how much clearer that would be.

Here is another version. Center axis, Text aligned right, combobox aligned left

Ansel-theme-suggestions1d-github

Although ideally I'd like to stick to UI conventions, like virtually all other software does: off-center vertical axis for alignment like you suggested, labels aligned right, comboboxes aligned left, slider labels in line.

Ansel-theme-suggestions1e-github

Still, this looks worse than what we have currently for people like me. At least, aligning a row to one side should be easier to read, and I think we should make dropdowns fill the empty space in a row.

On the right of some sliders and some comboboxes, we have buttons (color pickers or the rotate button in crop). That's different from most apps. The current choice is to have all text above sliders and reserve the rightmost space for the color-picker, whether or not there is actually one.

Don't use my screenshots to draw conclusions; the theming is very rough and I didn't wire up all the bits. For example, I didn't remove the on/off buttons in the history module, it's just that there is no minimal height/width declared for them in my theme, so Gtk gives them 0px and you don't see them.

Putting comboboxes in buttons is tricky and introduces side effects. Buttons mean you need external margins and internal padding. Right now, we only have margins, which gives a more compact spacing, allowing more content to fit. Also, all those boxes feel bulky.

Buttons are meant to set actionable text apart from dead labels and clearly inform that this is a clickable widget. For comboboxes, we already have the chevron stating that some effect is to be expected. For sliders, we have the usual knob. I don't think being more catholic than the pope on visual consistency is going to improve legibility here.

On the right of some sliders and some comboboxes, we have buttons (color pickers or the rotate button in crop). That's different from most apps. The current choice is to have all text above sliders and reserve the rightmost space for the color-picker, whether or not there is actually one.

I understand that that's different from some apps, although if you take a look at the screenshot from Capture One above, you will see a label, a combobox, and an eyedropper button on the same line. I am also aware of the current choice of text above sliders and reserving the rightmost space for the color picker; I just don't think it's such a good one. With your scrolling fix, the ability to expand side bars, your click-on-label-and-scroll feature, right-click for entering a value, the fine-adjustment tail-wiggly thingy, and finally no recompute until drag AND drop, precise adjustments are no longer a problem. I think sacrificing a tiny bit of horizontal space in favor of clarity would be a welcome improvement.

Don't use my screenshots to draw conclusions; the theming is very rough and I didn't wire up all the bits. For example, I didn't remove the on/off buttons in the history module, it's just that there is no minimal height/width declared for them in my theme, so Gtk gives them 0px and you don't see them.

I see, thanks for the clarification.

Putting comboboxes in buttons is tricky and introduces side effects. Buttons mean you need external margins and internal padding. Right now, we only have margins, which gives a more compact spacing, allowing more content to fit. Also, all those boxes feel bulky.

I understand, and I really appreciate you being a stickler for KISS. However, I'm not sure keeping everything as crammed together as possible is such a good idea. Again, I have a small screen, and don't mind a little more negative space for clarity in the interface. Perhaps you disagree. Yes, I did them by eye, and they came out a bit bulky; they could be much thinner if you like. What matters to me is the clarity of the interaction. A slight highlight of the text, going from close to mid-grey to white or close to white, is too subtle in my opinion. The chevron is an indicator that this may be a dropdown, but when nothing visibly changes on hover, it's not so obvious.

Buttons are meant to set actionable text apart from dead labels and clearly inform that this is a clickable widget.

Which kind of makes my point.

For comboboxes, we already have the chevron stating that some effect is to be expected.

I don't think this is enough.

For sliders, we have the usual knob. I don't think being more catholic than the pope on visual consistency is going to improve legibility here.

The new sliders are great.

I took some of @AlynxZhou's feedback and made this last mockup. I think it's better than my previous ones and hopefully improves over the current design: vertical axis for alignment, consistent margins (including top icons), two clear asymmetrical sections (a smaller one for labels, aligned right, and a larger one for comboboxes and text fields, aligned left), section titles no longer centered but aligned left, and slider labels as well as values aligned with the slider itself, much like comboboxes and their respective labels.

Ansel-theme-suggestions1h-github

Aligning text and boxes on columns like that requires a full rewrite. Widgets don't know about each other; they only know their own width, and label + value are a full pack. To align text like that, you need to know beforehand which widgets will be shown and traverse the whole list to find the widest. But, especially for export, the content of the box depends on the file format chosen, which loads its own set of controls. We are verging on over-engineering here.

Examples from native Gnome stuff might not be the best design, but since Gnome shares the same constraints with us (Gtk…), it is a good example of what is possible: not much. Align left, align right, align center, with some restrictions. We also need to take into account various screen sizes and changing string sizes depending on translations.

When putting this together, I was thinking in terms of very basic web design, since this can be achieved with two divs: say the first spans 35% of the parent object, and the other 65%. Labels would go in the first, aligned right, and comboboxes and so on in the second, aligned left. If I understand you correctly, there is no way to separate the label from the combobox, or the label from the slider?

If I understand you correctly, there is no way to separate label from combobox, or label from slider?

yes, and in addition, they can (dis)appear dynamically at runtime.

They do that already, right? So that issue is not new. But there is no way to tell the label and/or the combobox: "You have this much space"?

EDIT. There must be:

imaxe

EDIT2. Gimp's preferences are also like this. Their labels are not aligned right, but whatever. I'd take left-aligned labels over labels and comboboxes at the extremes of the window. Less legible than my previous mockup, but better than the current state.

Ansel-theme-suggestions1i-github

(I don't think the current alignment is bad; I have many apps aligning to both sides, and it is fine to me.)

EDIT2. Gimp's preferences are also like this. Their labels are not aligned right, but whatever. I'd take left-aligned labels over labels and comboboxes at the extremes of the window. Less legible than my previous mockup, but better than the current state.

I also like this one; it looks aligned to both sides, too.

I'm starting to think that comboboxes should use all available horizontal space within modules. Not sure about sliders. If we want them to align nicely, more horizontal space for the side panel is probably needed, unless we forget about having slider labels in line with sliders.

Ansel-theme-suggestions2c-github

Resolves https://github.com/darktable-org/darktable/issues/2802

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (feab273) 58.64% compared to head (147c30d) 58.64%.

@@ Coverage Diff @@
## develop #583 +/- ##
========================================
 Coverage 58.64% 58.64% 
========================================
 Files 246 246 
 Lines 14332 14332 
 Branches 1952 1952 
========================================
 Hits 8405 8405 
 Misses 5808 5808 
 Partials 119 119 
Flag Coverage Δ
benchmarks 8.60% <ø> (ø)
integration 46.80% <ø> (ø)
linux 56.59% <ø> (ø)
macOS 19.20% <ø> (ø)
rpu_u 46.80% <ø> (ø)
unittests 17.59% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@kmilos thank you!

Commit 1dbee19 in aurelienpierreeng/ansel

PThreads: improve debug

Committed by aurelienpierre on Dec 8, 2023 (1 parent: 0d430f6). Showing 1 changed file with 2 additions and 0 deletions: src/common/dtpthread.h

@@ -118,6 +118,7 @@ static inline int dt_pthread_mutex_lock_with_caller(dt_pthread_mutex_t *mutex, c
   mutex->time_sum_wait += wait;
   char *name = mutex->name;
   snprintf(mutex->name, sizeof(mutex->name), "%s:%d (%s)", file, line, function);
+  fprintf(stdout, "Thread lock %s acquired\n", mutex->name);
   int min_wait_slot = 0;
   for(int k = 0; k < TOPN; k++)
   {
@@ -173,6 +174,7 @@ static inline int dt_pthread_mutex_unlock_with_caller(dt_pthread_mutex_t *mutex,
   char *name = mutex->name;
   snprintf(mutex->name, sizeof(mutex->name), "%s:%d (%s)", file, line, function);
+  fprintf(stdout, "Thread lock %s released\n", mutex->name);
   int min_locked_slot = 0;
   for(int k = 0; k < TOPN; k++)
   {

Darkroom: lock pipe threads while we init history

No point in letting pipes start just now.

Revert "Develop: raise signal HISTORY_CHANGE on dt_dev_read_history"

This reverts commit 198823ac62f7cc1cf17b9525f776d074e2fdbc8e.

Signal history changed: emit in GUI code, not history

Might prevent the signal from being dispatched to un-inited widgets.

Develop: raise signal HISTORY_CHANGE on dt_dev_read_history

Will let the history module know that it's time to update its GUI.

develop: remove a lock triggering a deadlock

Ideally, this bit should capture the mutex lock, but I can't find where in the locked flow the same lock could be captured again.

darkroom: fixes and refactors

  • dedup the code to switch images from the filmroll; use the high-level view-switching control flow for unset/reset,
  • don't wait to capture the pipe mutexes before setting the gui_leaving flag. It is supposed to interrupt the pipes before they exit; waiting for the mutex lock defeats the purpose.

Try to make the CMake files easier to understand and handle the desktop entry better.

I'll do a rebase for this to solve the problem I found in https://github.com/aurelienpierreeng/ansel/commit/98e42fe27217a91e707a264e15f57ec03b1aba29#r134536199

Done.

Thanks a lot !

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues 0 Security Hotspots No data about Coverage 2.2% Duplication on New Code

See analysis details on SonarCloud

Describe the bug

This is a follow-up of issue 2831. In this case it concerns the std::regex used in the reading of IPTC date and time values.

To Reproduce

Steps to reproduce the behavior:

  1. File made for testing by me: exiv2-iptc-regex-utc64-1
  2. Start UCRT64 MSYS2 gimp3 or self-built gimp master with an up-to-date exiv2 package for UCRT64 (0.28.1-2) that includes the previous regex patch.
  3. Open the above image and notice extreme slowdown and huge memory use.

Expected behavior

GIMP finishing opening the image in a short time.

Desktop (please complete the following information):

  • OS and version: Windows 10 Home, 64-bit
  • Exiv2 version and source: see above, but also self-built under MSYS2 UCRT64 with added printf statements to see where the slowdowns happened
  • Compiler and version: gcc 13.2.0
  • Compilation mode and/or compiler flags: Release

Additional context

By adding printf statements, I observed the slowdowns this time in src/value.cpp:

  1. in DateValue::read, on the line static const std::regex reExtended(R"(^(\d{4})-(\d{2})-(\d{2}))"); (possibly also the next line, but I had to walk away for a while, and when I came back it was in)
  2. TimeValue::read, with slowdowns observed both in

  • static const std::regex re(R"(^(2[0-3]|[01][0-9]):?([0-5][0-9])?:?([0-5][0-9])?$)");

  • static const std::regex reExt( R"(^(2[0-3]|[01][0-9]):?([0-5][0-9]):?([0-5][0-9])(Z|[+-](?:2[0-3]|[01][0-9])(?::?(?:[0-5][0-9]))?)$)");

I then gave up waiting; it must have been close to half an hour at least.

Note: there seem to be a few more places using regex that may or may not be affected.

@lb90

getting rid of std::regex is quite challenging.

The best solution would be to adopt a constexpr regex library, but I don't know if it's a good idea to add yet another dependency.
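For illustration, the TimeValue pattern quoted above can also be matched by a hand-rolled scanner with no regex at all. This is a hedged sketch in plain C, not Exiv2's actual code:

```c
#include <ctype.h>

/* Hedged sketch, not Exiv2 code: a hand-rolled scanner for the pattern
 * ^(2[0-3]|[01][0-9]):?([0-5][0-9])?:?([0-5][0-9])?$
 * i.e. HH with optional MM and SS and optional colons, avoiding the
 * std::regex construction cost entirely. */
static int two_digits(const char **p, int max, int *out)
{
  if(!isdigit((unsigned char)(*p)[0]) || !isdigit((unsigned char)(*p)[1]))
    return 0;
  const int v = ((*p)[0] - '0') * 10 + ((*p)[1] - '0');
  if(v > max) return 0;
  *out = v;
  *p += 2;
  return 1;
}

/* Returns 1 and fills hh/mm/ss for a valid time string, 0 otherwise. */
static int parse_time(const char *s, int *hh, int *mm, int *ss)
{
  *mm = *ss = 0;
  if(!two_digits(&s, 23, hh)) return 0;
  if(*s == ':') s++;
  if(*s == '\0') return 1;
  if(!two_digits(&s, 59, mm)) return 0;
  if(*s == ':') s++;
  if(*s == '\0') return 1;
  if(!two_digits(&s, 59, ss)) return 0;
  return *s == '\0';
}
```

Such a scanner runs in a single pass over the input and has no per-call construction cost, which is exactly what the reported slowdown is about.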

History locks: rework the inter-thread safety

  • Fix a deadlock
  • Synchronize dt_iop_gui_update on history lock, to avoid refreshing GUI with transient parameters while history handling is busy
  • Protect only "public" dev history handling methods with mutex locks. Custom stuff will need custom handling.
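The "public vs. custom" split above can be sketched as follows; the names are illustrative, not the actual Ansel API:

```c
#include <pthread.h>

/* Illustrative sketch of the locking scheme described above, not the
 * actual Ansel API. Internal _ext variants assume the caller already
 * holds the history mutex, so only the public entry points take it,
 * and internals can be composed without double-locking. */
typedef struct dev_t
{
  pthread_mutex_t history_mutex;
  int history_end;
} dev_t;

/* Internal: caller must hold dev->history_mutex. */
static void history_pop_ext(dev_t *dev, int cnt)
{
  dev->history_end -= cnt;
}

/* Public: takes the lock, then delegates to the internal variant. */
static void history_pop(dev_t *dev, int cnt)
{
  pthread_mutex_lock(&dev->history_mutex);
  history_pop_ext(dev, cnt);
  pthread_mutex_unlock(&dev->history_mutex);
}
```

Keeping the lock at the public boundary is what prevents the re-entrant locking that caused the deadlock mentioned above.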

This is cosmetic, and I know there are more fundamentally broken pieces with the scopes and the color picker.

I've just taken a look at RawTherapee, and its way of handling the color picker tool is partly better than Ansel's. When we pick several values and add them to the list in the color picker module, a crosshair is placed on the viewport, while the values are displayed in the module itself.

RawTherapee displays the swatch on the picture, as well as its values. This makes it easier to identify where each swatch was sampled from.

(screenshot omitted)

They don't need to be displayed all the time, perhaps only on hover, but it is a helpful feature. We also don't need to display one swatch in RGB, another in HSV, and another in Lab. Just whatever is selected in the color picker module.

Just to be clear, what I'm proposing is keep what we have, but make it possible to display the values directly on the picture as well.

Its placement always bugged me from a UI POV.

Shouldn't it be a button somewhere on the bottom toolbar, with a popup? Either as a popup window like the new import window, or like the calendar widget in the same dialog?

It would free up a lot of space in the left panel for actual modules, while remaining readily accessible.

Sometimes I want to quickly check metadata for different images, so I want it on the side panel. With a popup I would need to reopen it each time I switch images, or move the main window so it doesn't cover the popup, which is inconvenient.

Is there no way to make a popup stay open? I imagine a button on the bottom toolbar, with a popup that stays open until clicked again.

Also, in other software (Lightroom, Capture One), metadata is just one more module. @aurelienpierre has said that in Ansel it is pinned to the bottom to always be accessible without having to scroll. My suggestion would take care of that requirement and make the UI saner (it's the only module placed there, its styling looks different, and when opened it takes pretty much all the space from the other left panel modules...).

If you put metadata in a dialog, it will be covered by the main window when I switch images, because I typically use the window fullscreen.

It could also be placed in a popup widget, like the calendar in the new import dialog. Or, you know, a regular module in the left panel.

Both in w:h format as well as Mpx for ease of use.

To be fair, the crop module already shows w:h while you are clicking and dragging the crop rectangle. IMO this info should always be visible in the module's body, to know how much resolution you are losing (or how large you can print).

The framing module doesn't show this info at all. It is necessary for figuring out print size.

Exiv2::Image::UniquePtr image = Exiv2::ImageFactory::open("d:/中文/test.jpeg"); // error

What version? What OS?

Probably a duplicate of https://github.com/Exiv2/exiv2/issues/2637?

When I think of zooming in or out, I don't think of the navigation module, but of the viewport. I think the zoom levels should be moved to the toolbar next to the clipping warning, color assessment, etc.

That way the navigation module will be used just for navigation.

Also, its icon is extremely unclear. It looks like a "fullscreen" icon. It should be some sort of magnifying glass instead.

I guess the "fullscreen" icon only means "auto fit viewport"; if you zoom the image, it will show the actual value instead of that icon. So I think the icon is not so bad for "auto fit viewport".

The one in RawTherapee has a magnifying glass with the "fullscreen" icon inside, which makes it a bit clearer that it indicates something related to zoom level.

It makes sense to me that the zooming is close to the panning window. You often use them one after the other, so it reduces the distance to travel with the mouse.

Generally fine for landscape orientation, but for portrait orientation it results in a tiny picture on my high-DPI display. It should be resizable like the scopes module.

When you export an image, the app provides info on two states: "exporting" and "exported". On lower-end computers there is no way of knowing if the pipeline is just heavy or if the computer froze. A progress bar with a percentage would be great.

There is a progress bar for multiple exports. Synchronizing the progressbar with the pipeline progression would make the export even slower.

Well... sort of. It adds a third state and tells you image 1 of n is finished. I don't have a solution to this problem, but an export progress bar is pretty much ubiquitous in software, from web browsers to image editors, video editors, 3D software, DAWs... It is useful to know if your image is just taking longer to export because of a heavy pipeline or if your computer froze.

Also, if you cancel the export, you no longer get "exporting image 1/1"; you just get "exporting image", which doesn't give you any more confidence that your computer is not freaking out.

To solve:

  1. in pixelpipe_hb.c, in dt_dev_pixelpipe_process_rec(), map the pos argument to a completion coeff,
  2. the function calls itself recursively from the end of the pipe, with pos = number of modules in pipe (typically, 83), until pos == 0, meaning we load the raw buffer into pipeline,
  3. store initial_pos = pos at the first call, then progress is computed from pos / initial_pos when running each module process() or fetching input buffer from cache. But the caching vs. processing anew paradigm makes it hard to tell if, for a given pos, we are currently processing or only calling the module (n - 1)
  4. find a way to communicate the current valid pos / initial_pos completion coeff to the GUI thread updating the progress bar. Doing so in a thread-safe way, taking into account that not all exports have a GUI and therefore a GUI progress bar, is going to be unreliable and prone to segfaults with current code.

TL;DR: a serious clean-up is needed before even thinking of implementing that.
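The pos-to-progress mapping in the steps above could be sketched as follows; this is a hypothetical helper, not existing Ansel code:

```c
/* Hypothetical helper, not existing Ansel code, for the steps above:
 * dt_dev_pixelpipe_process_rec() recurses down to pos == 0 and modules
 * execute while the recursion unwinds, so when the module at pos runs,
 * progress can be estimated as pos / initial_pos, clamped to [0, 1]. */
static double pipe_completion(int pos, int initial_pos)
{
  if(initial_pos <= 0) return 1.0; /* empty pipe: nothing to do */
  if(pos < 0) pos = 0;
  if(pos > initial_pos) pos = initial_pos;
  return (double)pos / (double)initial_pos;
}
```

As noted in step 3, the hard part is not this arithmetic but deciding which pos is "currently valid" when cached outputs skip whole stretches of the pipe.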

Pixel cache: any weight other than 0 makes it unreliable.

I don't understand why.

Pixelpipe: refactor GPU code

Move the OpenCL path to its own function for better legibility

Develop: unload the image at dev cleanup

Releases the thread locks properly

Merge pull request #582 from LebedevRI/shorten-64-to-32

Fully enable -Wshorten-64-to-32 on 32-bit builds too

Codecov Report

Attention: 8 lines in your changes are missing coverage. Please review.

Comparison is base (bd9074a) 58.65% compared to head (e5f55c7) 58.64%.

Files Patch % Lines
...rawspeed/decompressors/PanasonicV4Decompressor.cpp 33.33% 2 Missing :warning:
...rawspeed/decompressors/PanasonicV5Decompressor.cpp 33.33% 2 Missing :warning:
src/utilities/rstest/MD5Benchmark.cpp 0.00% 2 Missing :warning:
src/librawspeed/common/DngOpcodes.cpp 0.00% 1 Missing :warning:
src/utilities/rstest/rstest.cpp 66.66% 1 Missing :warning:
@@ Coverage Diff @@
## develop #582 +/- ##
===========================================
- Coverage 58.65% 58.64% -0.02% 
===========================================
 Files 246 246 
 Lines 14324 14332 +8 
 Branches 1952 1952 
===========================================
+ Hits 8402 8405 +3 
- Misses 5803 5808 +5 
 Partials 119 119 
Flag Coverage Δ
benchmarks 8.60% <43.33%> (+0.01%) :arrow_up:
integration 46.80% <65.21%> (-0.01%) :arrow_down:
linux 56.59% <74.07%> (-0.02%) :arrow_down:
macOS 19.20% <41.66%> (+0.01%) :arrow_up:
rpu_u 46.80% <65.21%> (-0.01%) :arrow_down:
unittests 17.59% <30.00%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown.

Pixelpipe: don't bypass cache for the last step

Breaks global color-picker. Wouldn't if the design was not shitty. To be fixed later.

Remove uses of preview_loading, image_loading and first_load

Formerly used to ensure we have a history and an image before doing GUI stuff. Now we know that we can't enter darkroom without both.

Darkroom pipelines : unify picture loading

  • Try to open the source image buffer (RAW) before entering darkroom,
  • If no image buffer, forbid access to darkroom,
  • Capture the mipmap lock globally in darkroom.

Benefits:

  • We know image history is inited when building the GUI, so no need for hacks (and layers of mutex) to prevent module GUIs from updating before history,
  • The thumbnail pipeline does not have to wait for the main pipeline to finish and release the mipmap lock,
  • The code is much cleaner, with fewer contextual behaviours and silly flag states to handle.

Develop: remove the history_updating member

Used to disable mask editing while history is initialising, but the GUI has no business caring about history.

Adding new history items while history is being read should already be prevented by the history mutex locks, and taking user input while the GUI is being reset should be prevented by darktable.gui->reset.

No need for another hacky & contextual layer.

Commit bd9074a on darktable-org/rawspeed:

Cr2Decoder: Canon ColorData4, ver.3: is at

LebedevRI committed Dec 6, 2023 · 1 parent acbfef9

Showing 1 changed file with 1 addition and 1 deletion: src/librawspeed/decoders/Cr2Decoder.cpp

@@ -320,7 +320,7 @@ getBlackAndWhiteLevelOffsetsInColorData(ColorDataFormat f,
   case 2:
     return std::nullopt; // Still no SpecularWhiteLevel.
   case 3:
-    return {{231, 617}};
+    return {{231, 629}};
   case 4:
   case 5:
     return {{692, 697}};

Commit 8eedde7 on aurelienpierreeng/ansel:

ansel.desktop : Complete path to icon

Jiyone authored and aurelienpierre committed Dec 6, 2023 · 1 parent 23fb7b3

Showing 1 changed file with 1 addition and 1 deletion: data/ansel.desktop.in

@@ -19,6 +19,6 @@ StartupNotify=true
 MimeType=application/x-ansel;image/x-dcraw;image/jpeg;image/jpg;image/jp2;image/png;image/tiff;image/x-portable-pixmap;image/x-portable-floatmap;image/x-exr;
-Icon=ansel
+Icon=${CMAKE_INSTALL_PREFIX}/share/icons/hicolor/scalable/apps/ansel.svg
 X-Unity-IconBackgroundColor=#252525

WHAT'S THE PROBLEM?

The icon path in ansel.desktop doesn't point to Ansel's icon folder/file, so the program doesn't show Ansel's icon: Icon=ansel

WHAT DOES THIS PR DO?

It points Icon to ansel.svg in Ansel's install folder so the system finds which icon to use: Icon=${CMAKE_INSTALL_PREFIX}/share/icons/hicolor/scalable/apps/ansel.svg

SonarCloud Quality Gate failed

0 bugs · 0 vulnerabilities · 0 security hotspots · 0 code smells

No coverage information · 4.8% duplication

[CI] windows: DON'T update the MSYS2

I'm not really sure why we were doing that, we should just consistently list dependencies in install/packboy sections.

[CI] macOS: DON'T update the brew packages

I'm not really sure why we were doing that, we should just consistently list dependencies in Brewfile


Ansel crashes with pipeline caches clear #250

Closed: mmoqui opened this issue Dec 6, 2023 · 2 comments

Labels: priority: critical (affects basic and core functionalities of the software in a way that prevents it from working at all), regression

mmoqui commented Dec 6, 2023:

Ansel 0.0.0+545~g2ed0c356f, built from source, HEAD on commit 2ed0c35.

Description of the bug

  1. Open Ansel with -d all: the previous state of the lighttable is displayed; it has only one RAW.
  2. Click on the RAW and then on the select button: a thumbnail of the RAW is computed.
  3. The thumbnail becomes black (ok, this is a known bug related to the cache).
  4. Then click on Run > Clean all pipeline caches.

The bug also occurs by applying only steps 1 and then 4.

Actual behavior

Ansel crashes with the following output:

Expected behavior

The caches are cleaned and the thumbnail should be recomputed (?)

System

Ansel version : 0.0.0+545~g2ed0c356f

OS : Linux 5.15.0-89

Linux - Distro : LinuxMint 21.2

Memory : 32 GB

Graphics card : NVidia GeForce MX330

Graphics driver : nvidia-driver-545

OpenCL installed : yes

OpenCL activated : yes

Xorg core : 21.1.4

Desktop : Cinnamon 5.8.4

GTK+ : 3.24.33

gcc : 12.3.0


pedrorrodriguez added the priority: critical and regression labels, and removed priority: high, on Dec 7, 2023.

aurelienpierre (Collaborator) commented Dec 8, 2023:

Should be fixed now?

mmoqui (Author) commented Dec 8, 2023:

Yes, with all the previous commits, the bug is fixed. I'm closing it.

mmoqui closed this as completed on Dec 8, 2023.


Please add RW2 tag "JpgFromRaw2" support to get a bigger preview image for some RW2 files.

exiv2 currently processes the RW2 tag PreviewImage 0x002e.

According to exiftool, another tag ID of the preview image for RW2 is 0x0127 (PanasonicRaw).

I tried it myself and was able to read it successfully:

  1. Add the tag ID to src/panasonicmn_int.cpp:

...
{0x0118, "RawDataOffset", N_("Raw Data Offset"), N_("Raw data offset"), IfdId::panaRawId, SectionId::panaRaw,
 unsignedLong, -1, printValue},
{0x0127, "JpgFromRaw2", N_("JpgFromRaw2"), N_("JpgFromRaw2"), IfdId::panaRawId, SectionId::panaRaw, undefined,
 -1, printValue},
{0x8769, "ExifTag", N_("Exif IFD Pointer"), N_("A pointer to the Exif IFD"), IfdId::panaRawId, SectionId::panaRaw,
 unsignedLong, -1, printValue},
...

  2. Add the loader info to src/preview.cpp:

const Loader::LoaderList Loader::loaderList_[] = {
...
{"image/x-panasonic-rw2", createLoaderExifDataJpeg, 12},
...
};
...
const LoaderExifDataJpeg::Param LoaderExifDataJpeg::param_[] = {
...
 {"Exif.PanasonicRaw.JpgFromRaw2", nullptr}, // 12
};

Thank you very much for your help.

Description of the bug

Opening Ansel, it starts in the last-used view state: here, Lighttable without side panels, picture size maximized (only one picture). After upgrading from ansel-d825e81-win64.exe to the latest ansel-96f0d97-win64.exe, it is no longer possible to bring back the side panels: it crashes.

To Reproduce

  1. Leave Ansel d825e81 in Lighttable view, without side panels, picture at max size.
  2. Update from ansel-d825e81-win64.exe to ansel-96f0d97-win64.exe and open Ansel.
  3. Press TAB to open the side panels. Crash.

Leaving ansel-d825e81-win64.exe in Lighttable view with side panels and multiple pictures, then updating to and starting Ansel 96f0d97 and pressing TAB to change the side panel view, crashes too.

Expected behavior

Change view of sidepanels in Lighttable view by pressing TAB.

System

Win10, i7-2600, Geforce GTX 1050ti ansel-96f0d97-win64.exe

ansel_bt_YZWQF2.txt

Fixed now?

It is no longer an issue as of the next version. Thanks a lot.

WHAT DOES THIS PR DO?

This PR adds code to apply --sudo and --force options to the clean-build and clean-install functions in build.sh

SonarCloud Quality Gate failed

0 bugs · 0 vulnerabilities · 0 security hotspots · 0 code smells

No coverage information · 4.8% duplication

Merge remote-tracking branch 'upstream/pr/580' into develop

  • upstream/pr/580:
    copyPixelsImpl(): add padding-less special-casing back
    copyPixels(): ensure that we do end up with memcpy()
    copyPixels(): reimplement over a Array2DRef
    copyPixels(): is only ever called to actually copy something.
    Array2DRef: pitch is non-negative and is at least as large as width
    copyPixels(): drop height == 1 specialcase
    CopyPixels: add benchmark
    Array2DRef: support CTAD
    CopyPixelsTest: test more invariants of copyPixels()
    copyPixels(): pitches must not be smaller than row size

copyPixelsImpl(): add padding-less special-casing back

Guess what, copying one gigantic memory chunk is faster than copying it per-row.

This, of course, only works if we really do have two compatible memory areas that have no padding between rows.
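The idea reads roughly like this in a simplified sketch (illustrative names, not rawspeed's actual implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified sketch of the special-casing above, not rawspeed's API:
 * when neither buffer has row padding (pitch == row size), the image is
 * one contiguous block and a single memcpy() suffices; otherwise fall
 * back to a per-row copy. */
static void copy_pixels(uint8_t *dst, size_t dst_pitch, const uint8_t *src,
                        size_t src_pitch, size_t row_size, size_t height)
{
  assert(dst_pitch >= row_size && src_pitch >= row_size);
  if(dst_pitch == row_size && src_pitch == row_size)
  {
    memcpy(dst, src, row_size * height); /* padding-less special case */
    return;
  }
  for(size_t row = 0; row < height; row++)
    memcpy(dst + row * dst_pitch, src + row * src_pitch, row_size);
}
```

One large memcpy() lets libc use its widest vectorized path once, instead of paying the per-call fixed cost on every row.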

copyPixels(): reimplement over a Array2DRef

This is so slow LOL.

copyPixels(): ensure that we do end up with memcpy()

(or at least memmove()...) This is slightly better, but Contiguous case is still worse.

Commit 786d31c on aurelienpierreeng/ansel:

Pixelpipe: remove superfluous invalidation

aurelienpierre committed Dec 6, 2023 · 1 parent 3446e37

Showing 22 changed files with 74 additions and 88 deletions.

src/develop/blend_gui.c (12 changes: 6 additions & 6 deletions)

@@ -1081,7 +1081,7 @@ static void _blendop_blendif_tab_switch(GtkNotebook *notebook, GtkWidget *page,
        || gtk_toggle_button_get_active(GTK_TOGGLE_BUTTON(data->colorpicker_set_values))))
   {
     dt_iop_color_picker_set_cst(data->module, _blendop_blendif_get_picker_colorspace(data));
-    dt_dev_invalidate_all(data->module->dev);
+    dt_dev_invalidate_all(data->module->dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(data->module->dev);
   }

@@ -1131,7 +1131,7 @@ static void _blendop_blendif_details_callback(GtkWidget *slider, dt_iop_gui_blen
   if((oldval == 0.0f) && (bp->details != 0.0f))
   {
-    dt_dev_invalidate_all(data->module->dev);
+    dt_dev_invalidate_all(data->module->dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(data->module->dev);
   }
 }

@@ -1618,15 +1618,15 @@ static gboolean _blendif_change_blend_colorspace(dt_iop_module_t *module, dt_dev
   dt_iop_gui_blend_data_t *bd = module->blend_data;
   const int cst_old = _blendop_blendif_get_picker_colorspace(bd);
-  dt_dev_add_new_history_item(darktable.develop, module, FALSE);
+  dt_dev_add_history_item(darktable.develop, module, FALSE);
   dt_iop_gui_update(module);
   if(cst_old != _blendop_blendif_get_picker_colorspace(bd) &&
      (gtk_toggle_button_get_active(GTK_TOGGLE_BUTTON(bd->colorpicker)) ||
       gtk_toggle_button_get_active(GTK_TOGGLE_BUTTON(bd->colorpicker_set_values))))
   {
     dt_iop_color_picker_set_cst(bd->module, _blendop_blendif_get_picker_colorspace(bd));
-    dt_dev_invalidate_all(bd->module->dev);
+    dt_dev_invalidate_all(bd->module->dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(bd->module->dev);
   }

@@ -2045,7 +2045,7 @@ void dt_iop_gui_update_blendif(dt_iop_module_t *module)
   if(module->request_mask_display != (bd->save_for_leave & ~DT_DEV_PIXELPIPE_DISPLAY_STICKY))
   {
     module->request_mask_display = bd->save_for_leave & ~DT_DEV_PIXELPIPE_DISPLAY_STICKY;
-    dt_dev_invalidate_all(module->dev); //DBG
+    dt_dev_invalidate_all(module->dev, __FUNCTION__, __FILE__, __LINE__); //DBG
     dt_dev_refresh_ui_images(module->dev);
   }
 }

@@ -2445,7 +2445,7 @@ static void _raster_value_changed_callback(GtkWidget *widget, struct dt_iop_modu
   if(reprocess)
   {
-    dt_dev_invalidate_all(module->dev);
+    dt_dev_invalidate_all(module->dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(module->dev);
   }
 }

57 changes: 19 additions & 38 deletions

57

src/develop/develop.c

Show comments

View file

This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters. Learn more about bidirectional Unicode characters

Show hidden characters

Original file line number

Diff line number

Diff line change

Expand Up

@@ -217,14 +217,12 @@ void dt_dev_pixelpipe_rebuild(dt_develop_t *dev)

dev -> pipe -> changed |= DT_DEV_PIPE_REMOVE ;

dev -> preview_pipe -> changed |= DT_DEV_PIPE_REMOVE ;

dt_pthread_mutex_unlock (& dev -> history_mutex );

dt_dev_invalidate_all (dev );

dt_dev_invalidate_all (dev , FUNCTION , FILE , LINE );

}

void dt_dev_invalidate (dt_develop_t * dev )

void dt_dev_invalidate (dt_develop_t * dev , const char * caller , const char * file , const long line )

{

dt_times_t start ;

dt_get_times (& start );

dt_show_times (& start , "[dev_process_image] sending killswitch signal on running pipelines" );

dt_print (DT_DEBUG_DEV , "[dev_process_image] sending killswitch signal from %s - in %s:%ld\n" , caller , file , line );

dt_atomic_set_int (& dev -> pipe -> shutdown , TRUE);

Expand All

@@ -234,11 +232,9 @@ void dt_dev_invalidate(dt_develop_t *dev)

dt_pthread_mutex_unlock (& dev -> history_mutex );

}

void dt_dev_invalidate_preview (dt_develop_t * dev )

void dt_dev_invalidate_preview (dt_develop_t * dev , const char * caller , const char * file , const long line )

{

dt_times_t start ;

dt_get_times (& start );

dt_show_times (& start , "[dev_process_preview] sending killswitch signal on running pipelines" );

dt_print (DT_DEBUG_DEV , "[dev_process_preview] sending killswitch signal from %s - in %s:%ld\n" , caller , file , line );

dt_atomic_set_int (& dev -> preview_pipe -> shutdown , TRUE);

Expand All

@@ -248,14 +244,14 @@ void dt_dev_invalidate_preview(dt_develop_t *dev)

dt_pthread_mutex_unlock (& dev -> history_mutex );

}

void dt_dev_invalidate_all (dt_develop_t * dev )

void dt_dev_invalidate_all (dt_develop_t * dev , const char * caller , const char * file , const long line )

{

// Send killswitch ASAP

dt_atomic_set_int (& dev -> pipe -> shutdown , TRUE);

dt_atomic_set_int (& dev -> preview_pipe -> shutdown , TRUE);

dt_dev_invalidate (dev );

dt_dev_invalidate_preview (dev );

dt_dev_invalidate (dev , caller , file , line );

dt_dev_invalidate_preview (dev , caller , file , line );

}

void dt_dev_process_preview_job (dt_develop_t * dev )

Expand Down

Expand Up

@@ -528,7 +524,7 @@ static inline void _dt_dev_load_raw(dt_develop_t *dev, const uint32_t imgid)

void dt_dev_reload_image (dt_develop_t * dev , const uint32_t imgid )

{

_dt_dev_load_raw (dev , imgid );

dt_dev_invalidate_all (dev );

dt_dev_invalidate_all (dev , FUNCTION , FILE , LINE );

}

float dt_dev_get_zoom_scale (dt_develop_t * dev , dt_dev_zoom_t zoom , int closeup_factor , int preview )

Expand Down

Expand Up

@@ -596,8 +592,8 @@ void dt_dev_load_image(dt_develop_t *dev, const uint32_t imgid)

void dt_dev_configure (dt_develop_t * dev , int wd , int ht )

{

// Called only from Darkroom to init drawing size

// fixed border on every side

// Called only from Darkroom to init and update drawing size

// depending on sidebars and main window resizing.

const int32_t tb = dev -> border_size ;

wd -= 2 * tb ;

ht -= 2 * tb ;

Expand All

@@ -616,7 +612,7 @@ void dt_dev_configure(dt_develop_t *dev, int wd, int ht)

if (dev -> image_storage .id > -1 && darktable .mipmap_cache )

{

// Only if it's not our initial configure call, aka if we already have an image

dt_dev_invalidate (dev );

dt_dev_invalidate (dev , FUNCTION , FILE , LINE );

dt_control_queue_redraw_center ();

dt_dev_refresh_ui_images (dev );

}

Expand Down

Expand Up

@@ -883,9 +879,6 @@ void _dev_add_history_item(dt_develop_t dev, dt_iop_module_t module, gboolean

dt_pthread_mutex_unlock (& dev -> history_mutex );

// invalidate buffers and force redraw of darkroom

dt_dev_invalidate_all (dev );

if (dev -> gui_attached )

{

/ signal that history has changed /

Expand All

@@ -902,17 +895,11 @@ void _dev_add_history_item(dt_develop_t *dev, dt_iop_module_t *module, gboolean
 void dt_dev_add_history_item(dt_develop_t *dev, dt_iop_module_t *module, gboolean enable)
 {
   _dev_add_history_item(dev, module, enable, FALSE);
+  dt_dev_invalidate_all(dev, __FUNCTION__, __FILE__, __LINE__);
   dt_control_queue_redraw_center();
   dt_dev_refresh_ui_images(dev);
 }

 void dt_dev_add_new_history_item(dt_develop_t *dev, dt_iop_module_t *module, gboolean enable)
 {
   _dev_add_history_item(dev, module, enable, TRUE);
   dt_control_queue_redraw_center();
   dt_dev_refresh_ui_images(darktable.develop);
 }

 void dt_dev_add_masks_history_item_ext(dt_develop_t *dev, dt_iop_module_t *_module, gboolean _enable, gboolean no_image)
 {
   dt_iop_module_t *module = _module;

@@ -963,11 +950,6 @@ void dt_dev_add_masks_history_item(dt_develop_t *dev, dt_iop_module_t *module, g
     /* recreate mask list */
     dt_dev_masks_list_change(dev);
   }
-  // invalidate buffers and force redraw of darkroom
-  dt_dev_invalidate_all(dev);
   dt_control_queue_redraw_center();
   dt_dev_refresh_ui_images(dev);
 }

 void dt_dev_free_history_item(gpointer data)

@@ -1040,6 +1022,8 @@ void dt_dev_reload_history_items(dt_develop_t *dev)
   dt_dev_modules_update_multishow(dev);
   dt_unlock_image(dev->image_storage.id);
+  dt_dev_invalidate_all(dev, __FUNCTION__, __FILE__, __LINE__);
 }

 void dt_dev_pop_history_items_ext(dt_develop_t *dev, int32_t cnt)

@@ -1162,8 +1146,6 @@ void dt_dev_pop_history_items(dt_develop_t *dev, int32_t cnt)
   dt_pthread_mutex_unlock(&dev->history_mutex);
-  dt_dev_invalidate_all(dev);
   dt_dev_masks_list_change(dev);
 }

@@ -1949,7 +1931,6 @@ void dt_dev_read_history_ext(dt_develop_t *dev, const int imgid, gboolean no_ima
   if(dev->gui_attached && !no_image)
   {
-    dt_dev_invalidate_all(dev);
     /* signal history changed */
     dt_dev_undo_end_record(dev);
   }

@@ -2002,15 +1983,15 @@ void dt_dev_reprocess_center(dt_develop_t *dev)
   // Flush the caches and recompute from scratch
   if(darktable.gui->reset || !dev || !dev->gui_attached) return;
   dt_dev_pixelpipe_cache_flush(&(dev->pipe->cache));
-  dt_dev_invalidate(dev);
+  dt_dev_invalidate(dev, __FUNCTION__, __FILE__, __LINE__);
 }

 void dt_dev_reprocess_preview(dt_develop_t *dev)
 {
   // Flush the caches and recompute from scratch
   if(darktable.gui->reset || !dev || !dev->gui_attached) return;
   dt_dev_pixelpipe_cache_flush(&(dev->preview_pipe->cache));
-  dt_dev_invalidate_preview(dev);
+  dt_dev_invalidate_preview(dev, __FUNCTION__, __FILE__, __LINE__);
 }

 void dt_dev_reprocess_all(dt_develop_t *dev)

@@ -2626,7 +2607,7 @@ int dt_dev_sync_pixelpipe_hash(dt_develop_t *dev, struct dt_dev_pixelpipe_t *pip
   // timed out. let's see if history stack has changed
   if(pipe->changed & (DT_DEV_PIPE_TOP_CHANGED | DT_DEV_PIPE_REMOVE | DT_DEV_PIPE_SYNCH))
   {
-    dt_dev_invalidate(dev);
+    dt_dev_invalidate(dev, __FUNCTION__, __FILE__, __LINE__);
     // pretend that everything is fine
     return TRUE;
   }

@@ -2723,7 +2704,7 @@ int dt_dev_sync_pixelpipe_hash_distort(dt_develop_t *dev, struct dt_dev_pixelpip
   // timed out. let's see if history stack has changed
   if(pipe->changed & (DT_DEV_PIPE_TOP_CHANGED | DT_DEV_PIPE_REMOVE | DT_DEV_PIPE_SYNCH))
   {
-    dt_dev_invalidate(dev);
+    dt_dev_refresh_ui_images(dev);
     // pretend that everything is fine
     return TRUE;
   }

src/develop/develop.h: 3 additions & 5 deletions

@@ -338,7 +338,6 @@ int dt_dev_is_current_image(dt_develop_t *dev, uint32_t imgid);
 const dt_dev_history_item_t *dt_dev_get_history_item(dt_develop_t *dev, const char *op);
 void dt_dev_add_history_item_ext(dt_develop_t *dev, struct dt_iop_module_t *module, gboolean enable, gboolean no_image);
 void dt_dev_add_history_item(dt_develop_t *dev, struct dt_iop_module_t *module, gboolean enable);
 void dt_dev_add_new_history_item(dt_develop_t *dev, struct dt_iop_module_t *module, gboolean enable);
 void dt_dev_add_masks_history_item_ext(dt_develop_t *dev, struct dt_iop_module_t *_module, gboolean _enable, gboolean no_image);
 void dt_dev_add_masks_history_item(dt_develop_t *dev, struct dt_iop_module_t *_module, gboolean enable);
 void dt_dev_reload_history_items(dt_develop_t *dev);

@@ -354,10 +353,9 @@ void dt_dev_invalidate_history_module(GList *list, struct dt_iop_module_t *modul
 // force a rebuild of the pipe, needed when a module order is changed for example
 void dt_dev_pixelpipe_rebuild(struct dt_develop_t *dev);
-void dt_dev_invalidate(dt_develop_t *dev);
-void dt_dev_invalidate_preview(dt_develop_t *dev);
 // also invalidates preview (which is unaffected by resize/zoom/pan)
-void dt_dev_invalidate_all(dt_develop_t *dev);
+void dt_dev_invalidate(dt_develop_t *dev, const char *caller, const char *file, const long line);
+void dt_dev_invalidate_preview(dt_develop_t *dev, const char *caller, const char *file, const long line);
+void dt_dev_invalidate_all(dt_develop_t *dev, const char *caller, const char *file, const long line);
 void dt_dev_set_histogram(dt_develop_t *dev);
 void dt_dev_set_histogram_pre(dt_develop_t *dev);
 void dt_dev_get_history_item_label(dt_dev_history_item_t *hist, char *label, const int cnt);

src/develop/imageop.c: 2 additions & 2 deletions

@@ -2861,7 +2861,7 @@ void dt_iop_refresh_center(dt_iop_module_t *module)
   dt_develop_t *dev = module->dev;
   if(dev && dev->gui_attached)
   {
-    dt_dev_invalidate(dev);
+    dt_dev_invalidate(dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(dev);
   }
 }

@@ -2872,7 +2872,7 @@ void dt_iop_refresh_preview(dt_iop_module_t *module)
   dt_develop_t *dev = module->dev;
   if(dev && dev->gui_attached)
   {
-    dt_dev_invalidate_preview(dev);
+    dt_dev_invalidate_preview(dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(dev);
   }
 }

src/develop/masks/masks.c: 1 addition & 1 deletion

@@ -1977,7 +1977,7 @@ void dt_masks_update_image(dt_develop_t *dev)
   // dt_similarity_image_dirty(dev->image_storage.id);
   // invalidate buffers and force redraw of darkroom
-  dt_dev_invalidate_all(dev);
+  dt_dev_invalidate_all(dev, __FUNCTION__, __FILE__, __LINE__);
   dt_dev_refresh_ui_images(dev);
 }

src/develop/pixelpipe_cache.c: 4 additions & 1 deletion

@@ -197,7 +197,10 @@ void dt_dev_pixelpipe_cache_print(dt_dev_pixelpipe_cache_t *cache)
 {
   for(int k = 0; k < cache->entries; k++)
   {
-    printf("pixelpipe cacheline %d used %d by %lu\n", k, cache->used[k], cache->hash[k]);
+    if(cache->hash[k] == (uint64_t)-1)
+      printf("pixelpipe cacheline %d unused\n", k);
+    else
+      printf("pixelpipe cacheline %d used %d by %lu\n", k, cache->used[k], cache->hash[k]);
   }
   printf("cache hit rate so far: %.3f\n", (cache->queries - cache->misses) / (float)cache->queries);
 }

src/gui/actions/display.h: 4 additions & 4 deletions

@@ -24,7 +24,7 @@ static void full_screen_callback()
   else
     gtk_window_fullscreen(GTK_WINDOW(widget));
-  dt_dev_invalidate(darktable.develop);
+  dt_dev_invalidate(darktable.develop, __FUNCTION__, __FILE__, __LINE__);
   dt_dev_refresh_ui_images(darktable.develop);
   /* redraw center view */

Expand Up

@@ -57,7 +57,7 @@ static void _toggle_side_borders_accel_callback(dt_action_t *action)

dt_ui_toggle_panels_visibility (darktable .gui -> ui );

/ trigger invalidation of centerview to reprocess pipe /

dt_dev_invalidate (darktable .develop );

dt_dev_invalidate (darktable .develop , FUNCTION , FILE , LINE );

dt_dev_refresh_ui_images (darktable .develop );

gtk_widget_queue_draw (dt_ui_center (darktable .gui -> ui ));

}

Expand Down

Expand Up

@@ -251,7 +251,7 @@ static void profile_callback(GtkWidget *widget)

dt_colorspaces_update_display_transforms ();

pthread_rwlock_unlock (& darktable .color_profiles -> xprofile_lock );

DT_DEBUG_CONTROL_SIGNAL_RAISE (darktable .signals , DT_SIGNAL_CONTROL_PROFILE_USER_CHANGED , DT_COLORSPACES_PROFILE_TYPE_DISPLAY );

dt_dev_invalidate_all (darktable .develop );

dt_dev_invalidate_all (darktable .develop , FUNCTION , FILE , LINE );

dt_dev_refresh_ui_images (darktable .develop );

}

}

Expand Down

Expand Up

@@ -287,7 +287,7 @@ static void intent_callback(GtkWidget *widget)

pthread_rwlock_rdlock (& darktable .color_profiles -> xprofile_lock );

dt_colorspaces_update_display_transforms ();

pthread_rwlock_unlock (& darktable .color_profiles -> xprofile_lock );

dt_dev_invalidate_all (darktable .develop );

dt_dev_invalidate_all (darktable .develop , FUNCTION , FILE , LINE );

dt_dev_refresh_ui_images (darktable .develop );

}

}

Expand Down

src/gui/color_picker_proxy.c: 1 addition & 1 deletion

@@ -214,7 +214,7 @@ static gboolean _color_picker_callback_button_press(GtkWidget *button, GdkEventB
   }
   // force applying the next incoming sample
   self->changed = TRUE;
-  dt_dev_invalidate_all(darktable.develop);
+  dt_dev_invalidate_all(darktable.develop, __FUNCTION__, __FILE__, __LINE__);
 }
 else
 {

src/iop/ashift.c: 3 additions & 3 deletions

@@ -3035,7 +3035,7 @@ static int do_get_structure_auto(dt_iop_module_t *module, dt_iop_ashift_params
     dt_control_log(_("data pending - please repeat"));
     // force to reprocess the preview, otherwise the buffer is ko
     dt_dev_pixelpipe_flush_caches(module->dev->preview_pipe);
-    dt_dev_invalidate_preview(module->dev);
+    dt_dev_invalidate_preview(module->dev, __FUNCTION__, __FILE__, __LINE__);
     dt_dev_refresh_ui_images(module->dev);
     goto error;
   }

Expand Up

@@ -3086,7 +3086,7 @@ static void _do_get_structure_lines(dt_iop_module_t *self)

dt_control_log (_ ("data pending - please repeat" ));

// force to reprocess the preview, otherwise the buffer is ko

dt_dev_pixelpipe_flush_caches (self -> dev -> preview_pipe );

dt_dev_invalidate_preview (self -> dev );

dt_dev_invalidate_preview (self -> dev , FUNCTION , FILE , LINE );

dt_dev_refresh_ui_images (self -> dev );

return ;

}

Expand Down

Expand Up

@@ -3133,7 +3133,7 @@ static void _do_get_structure_quad(dt_iop_module_t *self)

dt_control_log (_ ("data pending - please repeat" ));

// force to reprocess the preview, otherwise the buffer is ko

dt_dev_pixelpipe_flush_caches (self -> dev -> preview_pipe );

dt_dev_invalidate_preview (self -> dev );

dt_dev_invalidate_preview (self -> dev , FUNCTION , FILE , LINE );

dt_dev_refresh_ui_images (self -> dev );

return ;

}

Expand Down

src/iop/basicadj.c: 2 additions & 2 deletions

@@ -214,7 +214,7 @@ static void _auto_levels_callback(GtkButton *button, dt_iop_module_t *self)
   }
   dt_iop_gui_leave_critical_section(self);
-  dt_dev_invalidate_all(self->dev);
+  dt_dev_invalidate_all(self->dev, __FUNCTION__, __FILE__, __LINE__);
   dt_dev_refresh_ui_images(self->dev);
 }

Expand Up

@@ -355,7 +355,7 @@ int button_released(struct dt_iop_module_t *self, double x, double y, int which,

g -> button_down = 0 ;

g -> call_auto_exposure = 1 ;

dt_dev_invalidate_all (self -> dev );

dt_dev_invalidate_all (self -> dev , FUNCTION , FILE , LINE );

dt_dev_refresh_ui_images (self -> dev );

}

else

Expand Down


Merge remote-tracking branch 'upstream/pr/579' into develop

  • upstream/pr/579: Fujifilm FinePix SL1000 support

Codecov Report

Attention: 18 lines in your changes are missing coverage. Please review.

Comparison is base (d119d1d) 58.60% compared to head (2576860) 58.65%.

Files Patch % Lines
src/librawspeed/common/Common.h 48.27% 15 Missing :warning:
src/librawspeed/adt/Array2DRef.h 57.14% 3 Missing :warning:
@@ Coverage Diff @@
## develop #580 +/- ##
===========================================
+ Coverage 58.60% 58.65% +0.05% 
===========================================
 Files 245 246 +1 
 Lines 14268 14324 +56 
 Branches 1951 1952 +1 
===========================================
+ Hits 8362 8402 +40 
- Misses 5787 5803 +16 
 Partials 119 119 
Flag Coverage Δ
benchmarks 8.59% <68.83%> (+0.31%) :arrow_up:
integration 46.80% <0.00%> (-0.19%) :arrow_down:
linux 56.61% <68.49%> (+0.01%) :arrow_up:
macOS 19.19% <90.90%> (+0.16%) :arrow_up:
rpu_u 46.80% <0.00%> (-0.19%) :arrow_down:
unittests 17.60% <31.57%> (-0.04%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Pixelcache: give higher priority to modules lower in the pipeline

These are the most likely to be reused.

ReadTimstamp() returns a long int. Converted to an int, it will overflow in 2038.

Personally, I think auto might be a better idea because what if the type changes again in new versions? Using auto will avoid needing another change if anything happens in the future.

I think it's better to be explicit about the data types you are declaring. And if the implementation you are using has changed the return type, this is a good way to be notified about that and make the correct changes in order to keep the behaviour the same as before.

It is also good to be explicit from a code reviewer standpoint.

And if the implementation you are using have changed the return type, this is a good way to be notified about that

Only if we would activate Wconversion warnings, but that would generate a lot of warnings.

It is also good to be explicit from a code reviewer standpoint.

You would still have to check if the data type matches the function, which didn't happen when this code was written. With auto this bug wouldn't have happened.

But yeah, it's also a personal preference and not that important.


Maybe we should add Wconversion and start to remove conversions that could cause problems. It depends on how we treat the "master" branch. I think it should be a develop branch and when we feel ready we create a release, either a new release branch or just a release on the develop branch.

Sorry, that was the wrong button.

Maybe we should add Wconversion and start to remove conversions that could cause problems.

Sure, but that is work for the future.

After checking the sample taken from the RPU I settled on specifying the entire active area (which is larger than what the ADC reports) as I don't see any garbage pixels.

Hmm yes, I don't see hints of garbage pixels either on that sample. That does not mean there aren't any, but it's impossible to tell from a single sample.

@victoryforce thank you!

Color picker : fix GUI interaction in darkroom

Commit user input on mouse_moved and button_pressed, not on button_pushed.

That's a perk of the new API.

copyPixels(): drop height == 1 specialcase

benchmark says it basically doesn't matter perf-wise, and certainly does not matter above a ~64-byte-sized row.

Description of the bug

When dragging a slider to adjust a setting in darkroom, the setting is not applied until the mouse is let go. From what I have tested, this applies to all sliders.

This bug also occurs when right clicking and moving the slider setting.

To Reproduce

  1. Click on global variance slider
  2. Drag slider to increase or decrease setting, but do not let go of mouse button
  3. Error is that the changed setting is not applied to image
  4. Let go of mouse button
  5. Setting is now applied to image

  6. Right click on global variance slider

  7. Move mouse to increase or decrease setting
  8. Error is that the changed setting is not applied to image
  9. Click mouse button
  10. Setting is now applied to image

Expected behavior

While dragging a slider, the image should apply in real-time for immediate feedback of the change's effect.

System

  • darktable version : d825e81
  • OS : Windows 10 (build 19045)
  • Memory : 24576mb
  • Graphics card : AMD Radeon R9 200
  • OpenCL installed : Yes
  • OpenCL activated : Yes

Hi! This change has been made on purpose. You can right-click to show the precision popup, and hold the left mouse button clicked inside it while you move the cursor to continuously change the setting. You can also use the mouse wheel while your cursor overlaps the setting.


Ah, thank you. I didn't realise this was a deliberate change. Thanks for clarifying how I can still see the changes in real time.

De-implement D-Bus support

  • It crashes with Nextcloud and Pipewire events
  • It is used in only 2 places
  • It adds little value
  • It is not cross-OS
  • I don't have time to debug cosmetics

That.

Nice to haves:

  • Ability to quickly and accurately place nodes at middle grey, highlights (between middle grey and scene white), and shadows (between black and middle grey)
  • Editable splines a la Davinci Resolve. RawTherapee also has this feature, but I believe Resolve's UI is better.

Raw therapee:

[screenshot]

Resolve:

[screenshot]

Pipeline: remove the cache_obsolete property

Where you used to set that to true, simply call dt_dev_reprocess and the cache will be flushed on the spot.

Rework the pipeline vs. GUI synchronization.

Manually trigger pipeline recomputes on GUI events that require it. Ensure redrawings are dispatched before pipe recomputes for better UX.

The previous logic was to hide the pipe recompute into the darkroom expose() function, used as a Gtk callback for Cairo redrawing of the center view. Events that would invalidate the history would set pipe->*_status = DIRTY in a non-thread-safe way, through the dt_dev_invalidate() methods, then request a redraw of the center region, and expose() would listen to that and fire a pipeline at drawing time. That was absolutely useless, because the image was not ready at drawing time, so the same expose() function would have to be connected to the *_PIPE_FINISHED signal and redraw once again then.

It took me a full afternoon to figure out what the fuck started those computations, and that's the last place I expected it to be, because it makes no sense to rely on GUI events to start background tasks. Namely, if the GUI froze (because of too many user interactions restarting too many pipes), then the pipe would freeze too. That's like all the trouble of multi-threading without any of the benefits.

The present logic is more verbose but makes the 3 main events independent from each other:

  • commit history and invalidate pipeline if changed,
  • redraw UI, especially to display the "busy..." message,
  • spawn pipe recomputations.

More importantly, it allows multiple history invalidations at the same time while triggering the pipeline only once, or re-launching it with the kill-switch. Then, the GUI grabs the image backbuffer at its own pace, with signals. This makes the whole process more predictable and voids all the nasty workarounds piled up over the years to flush the cache all the time, because we were unable to assert its validity.

Pipeline: do not kill-switch in the middle of a module computation

That would corrupt cache lines and lead to black images.

Pipeline: remove force_reload property and IOP breakpoint

Breakpoint is handled through the atomic pipe_shutdown, and so should be the ill-named force_reload (it actually forces the pipe to shut down when closing the app).

Pixelpipe: sanitize the control flow

  • remove all calls to gtk redraw from within history and pipe handling,
  • make all writings to dev->pipe->changed thread-safe

darktable-org / rawspeed

CR2: read black/white levels from Makernotes #578

Merged: LebedevRI merged 25 commits into darktable-org:develop from LebedevRI:cr2-meta, Dec 4, 2023

Commits (25):

Revert "Cr2Decoder: move ColorData10 WB offset guessing into code"

LebedevRI Dec 2, 2023

Revert "Cr2Decoder: move ColorData9 WB offset guessing into code"

LebedevRI Dec 2, 2023

Cr2Decoder: do include header for isPowerOfTwo

LebedevRI Dec 3, 2023

RawDecoder::setMetaData(): support lack of <Sensor> for camera

LebedevRI Dec 2, 2023

Camera::getSensorInfo(): support no-<Sensor> entries for camera

LebedevRI Dec 2, 2023

Cr2Decoder: deduceColorDataFormat(): return version too

LebedevRI Dec 2, 2023

Cr2Decoder: read black/white levels from ColorData4 ver.4

LebedevRI Dec 2, 2023

Cr2Decoder: read black/white levels from ColorData4 ver.5

LebedevRI Dec 2, 2023

AbstractLJpegDecoder: support querying sample precision

LebedevRI Dec 3, 2023

Cr2Decoder: correctly handle sample precision mismatch for black/whit…

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData4 ver.6

LebedevRI Dec 3, 2023

Cr2Decoder: sraw interpolation subtracts black level already

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData4 ver.7

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData4 ver.9

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData5 ver.-4

LebedevRI Dec 3, 2023

Cr2Decoder: not sure where ColorData5 ver.-3 stores white level

LebedevRI Dec 3, 2023

Cr2Decoder: read(?) black/white levels from ColorData5 ver.-3

LebedevRI Dec 3, 2023

Cr2Decoder: colorDataVersion=10 might mean either ColorData6 or Color…

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData6 ver.10

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData7 ver.10

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData7 ver.11

LebedevRI Dec 3, 2023

Cr2Decoder: read black/white levels from ColorData8 ver.12

LebedevRI Dec 4, 2023

Cr2Decoder: read black/white levels from ColorData8 ver.13

LebedevRI Dec 4, 2023

Cr2Decoder: read black/white levels from ColorData8 ver.14

LebedevRI Dec 4, 2023

Cr2Decoder: read black/white levels from ColorData4 ver.3

LebedevRI Dec 4, 2023



Conversation

LebedevRI (Member) commented on Dec 3, 2023:

This handles basically everything CR2, except a few of the oldest cameras; it is not obvious if they even store in the .

Fixes #102 Fixes #417

LebedevRI added 2 commits on December 3, 2023 04:20

codecov bot commented on Dec 3, 2023:

Codecov Report

Attention: in your changes are missing coverage. Please review.

Files                                     Patch %   Lines
src/librawspeed/decoders/Cr2Decoder.cpp   31.86%    61 Missing and 1 partial ⚠️
src/librawspeed/decoders/RawDecoder.cpp   25.00%    9 Missing ⚠️
src/librawspeed/metadata/Camera.cpp       66.66%    1 Missing ⚠️

Flag          Coverage Δ
benchmarks    ⬇️
integration   ⬇️
linux         ⬇️
macOS         ⬇️
rpu_u         ⬇️
unittests     ⬇️
windows

LebedevRI added 14 commits on December 3, 2023 23:52

LebedevRI force-pushed the cr2-meta branch on December 4, 2023 01:06

LebedevRI changed the title from [WIP] CR2: read black/white levels from Makernotes to CR2: read black/white levels from Makernotes on Dec 4, 2023

LebedevRI mentioned this pull request on Dec 4, 2023: canon EOS M6 White Level for stopped down lens #467 (Closed)

LebedevRI force-pushed the cr2-meta branch on December 4, 2023 01:28

LebedevRI added 9 commits on December 4, 2023 04:41

LebedevRI force-pushed the cr2-meta branch on December 4, 2023 01:51

LebedevRI (Member, Author) commented on Dec 4, 2023:

Gone through all of the RPU regression tests once more; all seems to be in order, aside from the 's sRaw/mRaw, where we go from a final whitelevel of 16383 to ~58232. It seems like those images are just -2EV underexposed, but I'm not really sure...

I'm just going to merge this now.

LebedevRI merged commit into darktable-org:develop on Dec 4, 2023. 55 of 57 checks passed.

LebedevRI deleted the cr2-meta branch on December 4, 2023 18:17.


Merge remote-tracking branch 'upstream/pr/578' into develop

  • upstream/pr/578: (25 commits) Cr2Decoder: read black/white levels from ColorData4 ver.3 Cr2Decoder: read black/white levels from ColorData8 ver.14 Cr2Decoder: read black/white levels from ColorData8 ver.13 Cr2Decoder: read black/white levels from ColorData8 ver.12 Cr2Decoder: read black/white levels from ColorData7 ver.11 Cr2Decoder: read black/white levels from ColorData7 ver.10 Cr2Decoder: read black/white levels from ColorData6 ver.10 Cr2Decoder: colorDataVersion=10 might mean either ColorData6 or ColorData7, differentiate them correctly Cr2Decoder: read(?) black/white levels from ColorData5 ver.-3 Cr2Decoder: not sure where ColorData5 ver.-3 stores white level Cr2Decoder: read black/white levels from ColorData5 ver.-4 Cr2Decoder: read black/white levels from ColorData4 ver.9 Cr2Decoder: read black/white levels from ColorData4 ver.7 Cr2Decoder: sraw interpolation subtracts black level already Cr2Decoder: read black/white levels from ColorData4 ver.6 Cr2Decoder: correctly handle sample precision mismatch for black/white levels AbstractLJpegDecoder: support querying sample precision Cr2Decoder: read black/white levels from ColorData4 ver.5 Cr2Decoder: read black/white levels from ColorData4 ver.4 Cr2Decoder: deduceColorDataFormat(): return version too ...

Pixelpipe: change sync : reorder the paths to ensure synchronization happens only once.

Minor debug strings changes.

GUI : replace calls to dt_dev_reprocess with dt_dev_invalidate

dt_dev_reprocess calls dt_dev_invalidate internally but flushes all caches. There is no need to do that all the time.

Remove superfluous calls to redraw() since the relevant surfaces to redraw are connected to the signals *_PIPE_FINISHED.

Darkroom : Do not mix GUI drawing with pipeline handling

  • remove the non-thread-safe gui_sync flag, set from the pipeline compute and read at center-view redraw to update modules that need it (seriously, WTF ?). Modules that need updating upon pipeline completion can connect to the signal UI_PIPE_FINISHED.
  • do not start a pipeline recompute from expose(), which is meant to draw. The nasty hack that has been abused in dt is to set dev->preview_status == DT_DEV_PIXELPIPE_DIRTY and then trigger a Gtk redraw over the center view from history handling. The GTK drawing callback then reads dev->preview_status without taking thread locks, and later catches the UI_PIPE_FINISHED signal again.

In addition to not being thread-safe, this stacks the latencies of the pipe and GUI threads on top of each other twice, delaying pipe recomputes if the GUI thread is busy, and the other way around.

Really, really bad code and a tangled mess.

Merge remote-tracking branch 'upstream/pr/487' into develop

  • upstream/pr/487: Pentax K-3 Mark III Monochrome support

I tried to backport some fixes. It builds and runs on my system, but I hardly use this program to process non-RAW images, so if other people get errors with this, please @ me.

Hm, please don't merge this PR until I've finished the changes on the desktop entry, because I am planning to move the desktop-entry-related changes here into another PR, thanks!

I removed the CMake/desktop entry related commit from this PR (and submitted another PR for those); this should have no conflict with master now.

Thanks !

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues, 0 Security Hotspots, no data about Coverage, 0.3% Duplication on New Code

See analysis details on SonarCloud

Cr2Decoder: read black/white levels from ColorData4 ver.3

Refs. https://github.com/exiftool/exiftool/issues/233

Cr2Decoder: colorDataVersion=10 might mean either ColorData6 or ColorData7, differentiate them correctly

Cr2Decoder: read(?) black/white levels from ColorData5 ver.-3

I think the 662 is the SpecularWhiteLevel, but I'm not 100% sure the 264 is the per-channel black levels and not average black levels.

Refs. https://github.com/exiftool/exiftool/issues/232

WHAT IS THIS FOR?

This prevents the user from creating folders and files with names containing whitespace characters at the beginning or the end. Those characters are usually typed by mistake and lead to bugs, for example in some backup programs.

HOW DOES IT WORK?

The PR inserts a function just before saving the user's path (in the Import and Export windows) to remove those whitespace characters using g_strstrip().

Quality Gate failed

Failed conditions

4.7% Duplication on New Code (required ≤ 3%)

See analysis details on SonarCloud

CMake: Refactor Lua part to use the same logic with LibRaw.

Which means we have USE_BUNDLED_LUA=ON instead of DONT_USE_INTERNAL_LUA=ON by default; this should not be a huge change because we don't build with Lua in CI.

CMake: Add USE_BUNDLED_LIBRAW

By default this is enabled so we can still control the version in CI builds, but this option will allow packagers to use their system-wide LibRaw instead of the bundled one.

Cr2Decoder: read black/white levels from ColorData4 ver.4

Anything earlier than that does not seem to provide white level, at least not as a complete value.

Cr2Decoder: correctly handle sample precision mismatch for black/white levels

In Canon MakerNotes, the levels are always unscaled, and are 14-bit, and so if the LJpeg precision was lower, we need to adjust.

The release workflow which also runs every night to create a pre-release is currently failing: https://github.com/Exiv2/exiv2/actions/workflows/release.yml

I'm adding a badge to the readme to make this more visible. Per GitHub docs, I'm the only one who receives a notification about this, as I'm the one who created the workflow. I'm no longer actively contributing to this project, though. To change the notifications, I'll disable the workflow in the hope that one of the active maintainers will re-enable it, which will make them the person receiving notifications about future failures.

develop: replace internal cumulative hash with fetching global hash in legacy functions

Nuke the shortcuts tab in preferences popup

Not going to debug that shit. Key accels will be replaced by native Gtk accels soon.

I also tried to make the Lua code cleaner. It could build with both values on my system, but I personally never use Lua scripts, so if any users find real problems with my changes, please @ me and I'll try to fix them.

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 0 Code Smells (A)

No Coverage information, 1.8% Duplication

Would anyone mind telling me if lua scripts are currently unsupported and the following is expected? I'm using the ansel-git package from the aur, and when running ansel -d lua the following error is returned:

0.751038 LUA ERROR : dt_lua_event_add: wrong number of args for post-import-film, expected 3, got 4
0.789819 LUA ERROR : /home/clu/.config/ansel/lua/tools/script_manager.lua:57: module 'darktable.debug' not found:

no field package.preload['darktable.debug']

no file '/home/clu/.lua/darktable/debug.lua'

no file '/usr/share/lua/5.4/darktable/debug.lua'

no file '/usr/share/lua/5.4/darktable/debug/init.lua'

no file '/usr/lib/lua/5.4/darktable/debug.lua'

no file '/usr/lib/lua/5.4/darktable/debug/init.lua'

no file './darktable/debug.lua'

no file './darktable/debug/init.lua'

no file '/usr/share/ansel/lua/darktable/debug.lua'

no file '/home/clu/.config/ansel/lua/darktable/debug.lua'

no file '/usr/lib/lua/5.4/darktable/debug.so'

no file '/usr/lib/lua/5.4/loadall.so'

no file './darktable/debug.so'

no file '/usr/lib/lua/5.4/darktable.so'

no file '/usr/lib/lua/5.4/loadall.so'

no file './darktable.so'

It appears script_manager.lua is looking for darktable binaries, which leads me to believe Lua scripts are not supported yet, but I am not a programmer and clarification would be helpful.

@vredesbyyrd Are you using the latest commit? Try re-running the command you used to build the AUR package; it will fetch and build the latest commit.

@AlynxZhou

I rebuilt just to be sure, but the same error persists. I did realize that importing some lua scripts directly from luarc works, e.g.:

require "official/image_path_in_ui"

Some others still fail. OpenInExplorer.lua errors with:

0.765140 LUA ERROR : /home/clu/.config/ansel/lua/contrib/OpenInExplorer.lua:203: field "image" not found for type dt_lua_singleton_lib

CI: Bump dependencies versions

Exiv2 v0.28.1 works fine on Linux.

aom v3.7.1 is the latest stable release so use it instead of master.

CI: Fix heif support by manually build libheif for AppImage

Ubuntu 22.04 has libheif 1.12, but we need 1.13 or later, so the AppImage we build doesn't have HEIF support. To fix that, manually build and install libheif in CI so we can bundle the newer version we need.

Description of the bug

When exiting Darkroom and returning to Lighttable, the scroll will be set such that the edited image appears on the first row of thumbnails, regardless of where the scroll was previously. Furthermore, there won't be any scroll bar shown to indicate the position in the scroll list, which is confusing.

To Reproduce

  1. Open a collection in Lighttable. Ensure more than 1 row of thumbnails are present but not enough to cause a scroll bar to appear.
  2. Open an image from the 2nd row in Darkroom.
  3. Close Darkroom to return to Lighttable.
  4. The top row of shown thumbnails will actually be row 2 of the collection, yet there will be no scroll bar.
  5. Scrolling with the mouse wheel returns the view to its proper state.

Expected behavior

The scroll location should not change when re-entering Lighttable, and/or a scroll bar should be shown. For a while I thought Lighttable had fully bugged out and was showing the wrong set of images, because there is no visual indication that you're not starting at row 1 of the collection.

Context

https://github.com/aurelienpierreeng/ansel/assets/25008419/6cf92407-1465-45f9-aace-7b90d74b6f55

Which commit introduced the error

Not sure which commit exactly, but it occurs on 1d3f83d.

System

  • darktable version : e.g. 3.5.0+250~gee17c5dcc
  • OS : Win10, version 22H2, build 19045.3693
  • Memory : 8GB, 2133 MHz
  • Graphics card : Nvidia GeForce 920MX
  • Graphics driver : GeForce game ready driver version 537.42
  • OpenCL installed : unknown
  • OpenCL activated : Yes
  • GTK+ : unknown
  • gcc : unknown
  • cflags : unknown
  • CMAKE_BUILD_TYPE : unknown

Additional context

  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? Yes
  • Do you use lua scripts? No

Well done on the detailed description of this issue. The rewrite of the Lighttable is still on the TODO list, so chances are we are going to see different behaviour then.

Given that the darkroom will open the thumbnail selected in lighttable, and that this selection can change in darkroom using the filmroll, this is the right thing to do, intuitive or not : it keeps consistency between views. Who cares about where we used to be scrolled at ?

Also use new version of checkout hooks.

Update Exiv2 to 0.28.1, it is OK on Linux.

Use stable version of aom instead of master.

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 41 Code Smells (A)

No Coverage information, 1.8% Duplication

Thanks !

Pixelpipe: init the global hash with pipe->image.id

pipe->output_imgid is set at completion.

Pipeline: look for the last active module in pipe, not just the last, to fetch final hash

Pipeline: do not bypass computations based on GUI events

It's brittle and adds complexity. If GUI events need to bypass stuff, they need to amend the integrity hash.

All the pipe should care about is looking for cache lines matching hashes. That is modular and maintainable.

De-implement the fast pipe mode

The fast pipe mode is a hack by someone who didn't know that the mask API already has everything needed to slap a mask preview onto the end of the pipeline while bypassing all the intermediate steps.

The last thing the pipe needs is to handle another special case.

Revert "Cr2Decoder: move ColorData10 WB offset guessing into code"

Does not actually happen on any of the .CR2 RPU samples.

This reverts commit d8d3a5d97c1cfe38d636e98f4dd5c8af079c5a24.

Revert "Cr2Decoder: move ColorData9 WB offset guessing into code"

Does not actually happen on any of the .CR2 RPU samples.

This reverts commit 3e3254aa6a46ac4671f6f2279dcd097a25745025.

Merge remote-tracking branch 'upstream/pr/577' into develop

  • upstream/pr/577: Cr2Decoder: scale cameras.xml black/white levels after interpolation

Merge remote-tracking branch 'upstream/pr/576' into develop

  • upstream/pr/576: LJpeg: "ban" point transform

Cr2Decoder: scale cameras.xml black/white levels after interpolation

As far as I can tell, in Canon MakerNotes the black/white levels are always specified on a 14-bit scale, while in cameras.xml they are at their final scale.

This is inconsistent and complicates things, so let's just upscale them afterwards, removing the divergence.

crop: add a missing quad_toggle declaration for bauhaus combobox

fix #240

Import : implement metadata lookup into library

  • show the path of the first import for imported images,
  • implement a "select new" button

I was looking at the black/white level reading for CR2, which apparently requires scaling said levels by the frame precision. Then I noticed the point transform, and that we apparently never (as of the RPU sample set, at least) encounter the case where it's actually active.

Looking at the spec, the thing that gives me pause is whether the change to the initial predictor is all there is to it, or whether we also need to scale the decoded data.

Until such a case is hit, let's single it out.

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (3e42f6c) 58.72% compared to head (08ee383) 58.72%.

Files Patch % Lines
...librawspeed/decompressors/AbstractLJpegDecoder.cpp 50.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #576 +/- ##
===========================================
- Coverage 58.72% 58.72% -0.01% 
===========================================
 Files 245 245 
 Lines 14180 14182 +2 
 Branches 1939 1940 +1 
===========================================
+ Hits 8327 8328 +1 
- Misses 5735 5736 +1 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.32% <0.00%> (-0.01%) :arrow_down:
integration 47.03% <50.00%> (+<0.01%) :arrow_up:
linux 56.70% <50.00%> (-0.01%) :arrow_down:
macOS 19.12% <0.00%> (-0.01%) :arrow_down:
rpu_u 47.03% <50.00%> (+<0.01%) :arrow_up:
unittests 17.75% <0.00%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

LJpeg: "ban" point transform

I was looking at the black/white level reading for CR2, which apparently requires scaling said levels by the frame precision. Then I noticed the point transform, and that we apparently never (as of the RPU sample set, at least) encounter the case where it's actually active.

Looking at the spec, the thing that gives me pause is whether the change to the initial predictor is all there is to it, or whether we also need to scale the decoded data.

Until such a case is hit, let's single it out.

metadata: already_imported : return the imgid of the image if found, not just a boolean

Counterclockwise round arrow icon no longer switches between portrait & landscape crop

6daff28

Windows11

Merge pull request #575 from LebedevRI/cr2-meta

Cr2Decoder: move WB offset guessing into code

CMake: Manually check version to support libavif 1.0.0

libavif released their first stable version, but its config-version.cmake requires the major version to match exactly; to keep supporting libavif >= 0.8.2, we have to check the version manually.

I tested on my system that Ansel builds with libavif 1.0.1, and this does not break building with libavif 0.11.1 on CI.
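
A rough sketch of the manual-check approach (hypothetical variable names; the actual CMake module may differ): find the package quietly without pinning a major version, then enforce the minimum version ourselves instead of relying on config-version.cmake's exact-major matching.

```cmake
# Hypothetical sketch: locate libavif without letting its
# config-version.cmake reject a newer major version, then
# enforce our own minimum version manually.
find_package(libavif CONFIG QUIET)
if(libavif_FOUND AND libavif_VERSION VERSION_LESS "0.8.2")
  message(STATUS "libavif ${libavif_VERSION} is too old, need >= 0.8.2")
  set(libavif_FOUND FALSE)
endif()
```

This accepts both 0.11.x and 1.x, which an exact-major version file would otherwise split into incompatible ranges.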

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 3 Code Smells (A)

No Coverage information, 1.8% Duplication

Codecov Report

Attention: 29 lines in your changes are missing coverage. Please review.

Comparison is base (c4b88a4) 58.78% compared to head (6229823) 58.72%.

Files Patch % Lines
src/librawspeed/decoders/Cr2Decoder.cpp 45.28% 29 Missing :warning:
@@ Coverage Diff @@
## develop #575 +/- ##
===========================================
- Coverage 58.78% 58.72% -0.07% 
===========================================
 Files 245 245 
 Lines 14136 14180 +44 
 Branches 1935 1939 +4 
===========================================
+ Hits 8310 8327 +17 
- Misses 5708 5735 +27 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.32% <0.00%> (-0.03%) :arrow_down:
integration 47.03% <46.15%> (-0.04%) :arrow_down:
linux 56.70% <46.15%> (-0.07%) :arrow_down:
macOS 19.12% <0.00%> (-0.05%) :arrow_down:
rpu_u 47.03% <46.15%> (-0.04%) :arrow_down:
unittests 17.75% <0.00%> (-0.06%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Cr2Decoder: move ColorData5 WB offset guessing into code

This also fixes WB reading for Canon PowerShot S100V.

Pixelpipe cache: handle pipe events in hash computing

Account for:
  • fast pipe,
  • color picker & histogram requests,
  • mask preview requests

.. in a way that doesn't require cache invalidation from GUI events.

Increase the number of cache lines to 64.

CI: Manually build exiv2 0.27 for Windows package

Exiv2 0.28 dropped the old std::wstring path handling and breaks non-ASCII paths on Windows. While recent Windows supports using UTF-8 as the locale, that might cause other issues.

Until the actual fix gets merged, we stick to the exiv2 0.27 branch for Windows builds.

CI: Use GitHub ref name as branch for macOS

Like what we did for Linux CI, so we could test other branches.

Also run all CI from the selected branch instead of master; previously this was done for Linux CI and it seems harmless.

Related: #231

Thanks a lot !

The only confusing thing about building on Windows is that I need to install the built libraries to ${MINGW_PREFIX}, which is where MinGW puts its packages, so that CMake's CPack can find my library.

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 0 Code Smells (A)

No Coverage information, 1.8% Duplication

Fixes: fb932b985123 ("image.h : API cleaning, remove unused parameter")

Without this, compilation fails with USE_LUA=ON.

I don't really get why a one-liner would impact code duplication that way.

Sorry, I didn't see the PR before fixing it myself.

It's ok, it is trivial anyway.

SonarCloud Quality Gate failed.

0 Bugs (A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 0 Code Smells (A)

No Coverage information, 4.9% Duplication

Description of the bug

Today I updated to 0.0.0+494~g9e5c5285b on Ubuntu 22.04. I modified one photo, and when I switched back to Lighttable, the image was black. If I go back to Darkroom, the image is processed correctly.

To Reproduce

  1. Go to 'Lighttable'
  2. Double click on one photo to open 'Darkroom'
  3. Apply 'crop'
  4. Close 'Darkroom'

Expected behavior

Image cropped correctly also in the Lighttable view

Context

image_plus_xmp.zip

Screenshots immagine

Which commit introduced the error

a95375f5780fb953492ed0746ecc3d59bc2af1b2 is the first bad commit

System

  • darktable version : e.g. 3.5.0+250~gee17c5dcc
  • OS : Linux 6.2.0-37-generic #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC x86_64 x86_64 x86_64 GNU/Linux
  • Linux - Distro : Ubuntu 22.04
  • Memory : 16 GB
  • Graphics card : Radeon HD 7000
  • Graphics driver : AMD
  • OpenCL installed : ROCr
  • OpenCL activated : unknown
  • Xorg : unknown
  • Desktop : wayland
  • GTK+ : 3.24.33
  • gcc : 12.3.0
  • cflags : unknown
  • CMAKE_BUILD_TYPE : Release

Additional context

  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? YES
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? YES

Yep, I know. WIP

Fixed by 69ceae1b44835c6612a814508a8c3e097b10162f

pipeline cache : simplify the code, fix inconsistent states ?

The pipe cache code was a tangled mess taking input from both the GUI (active module) and the pipeline. That's error-prone. Plus it had a couple of smart-ass perks (ROI-independent cache? what for?).

Changes made:
  • each module gets a global hash representing its internal params, blending params, previous modules and ROI size,
  • this hash is assigned at pipeline (re)construction and is independent from GUI events; it gets recomputed over the whole pipeline,
  • this hash is used as a key to fetch cache lines.

Any upstream change in a module is automatically propagated to the following hashes. Therefore we only need to find the latest valid hash (starting from the end of the pipeline) and start recomputing from there. No need to track which module the user was tinkering with when the recomputation was requested.

dbus: protects against double free

Doesn't solve the issue that import jobs call dt_debus_destroy more than once, apparently.

exif.cc : write_xmp : warn the user if the XMP to write weighs more than 1 MB

If writing XMP on import, that can make the import hang for a long time.

AppImage stores pre-built OpenCL kernels in KERNEL DIRECTORY: /tmp/.mount_Ansel-JKZOKO/usr/share/ansel/kernels. This is not persistent across restarts, so the kernels have to be rebuilt every time, which introduces a significant lag in loading time.

Find a way to store kernels in .cache/ansel/kernels as with the vanilla variant.

https://askubuntu.com/questions/1009888/how-does-an-appimage-persist-settings-between-launches states that starting /some/path/to/my.AppImage --appimage-portable-config creates a persistent config directory across restarts. The equivalent for cache is not implemented.

As a workaround, in src/common/file_location.c:198, make the use of g_get_user_cache_dir() optional and create a user-defined config key to force a cache dir.

The weird thing is that the AppImage is able to use the default, generic system-wide config directory, which is persistent (no issue on that side).

I think the reason is that the AppImage extracts itself to a different dir each time, so the map between the .cl path and the cache becomes invalid because the extraction path has changed?

Something like that, yes.

The first solution that comes to mind is to release those .cl files into some user folder like .local/share/ansel/kernels, try to load from there first, and then fall back to /usr/share/ansel/kernels. However, the problem is that you either release the files on every startup, or you have to create a method to update .local/share/ansel/kernels when there is a new version, or you only enable this behaviour when building an AppImage, via a build option.

I then had a better idea: instead of using paths as the key to the cache, what about writing a version file into the kernels dir, like the git commit hash or a random string generated by CMake at build time into version.txt, and using it as the cache key? That way, the same build will always use the same cache.

I am not sure which one is better and easier to implement, but I'm just writing down the suggestions I could come up with for you 😸️

I then had a better idea: instead of using paths as the key to the cache, what about writing a version file into the kernels dir, like the git commit hash or a random string generated by CMake at build time into version.txt, and using it as the cache key? That way, the same build will always use the same cache.

Won't work. If you start the same AppImage twice, you get two different temporary dir paths with random names. The only hope here is to store the cache somewhere immutable, which might need to be user-configurable. The config dir works; it's just that the cache dir doesn't seem to be fully implemented in the AppImage stack.

Won't work. If you start the same AppImage twice, you get two different temporary dir paths with random names.

Did you mean that the OpenCL cache is not controlled by us, so it must use the path as the key to map to the cache?

export module : move it into a popup

Out of the way, yet accessible from all views via the global menu.

Import popup : implement direct open on double click on file

If the file is an image, open it in darkroom straight away on Enter or double click.

The caveat is that if we are already in darkroom, it goes back to lighttable first, because that's the only way to reset the image cache read/write locks without a segfault.

gtk: use GTK_STATE_FLAG type

appease static analysis.

please add CRW tag "JpgFromRaw" support to get a bigger preview image for CR3 images

exiv2 process crw tag ThumbnailImage 0x2008

according to exiftool, the tagid of the full size preview image for CRW is 0x2007 canon_raw

Thank you very much for your help

exiv2 -pp and exiv2 -ep can already extract the 0x2007 preview via these Exif.Image2 tags, which is thus exposed through the standard preview API. Does that cover your needs?

@kmilos Thank you very much for your reply, but I found that for my CR3 file the 0x2007 preview from the CRW decoder cannot be read, because CR3 uses the BMFF decoder and parseCr3Preview only gets a small preview image (usually 1620 × 1080).

Ok, so nothing to do w/ CRW actually... You want the full size JPEG from CR3 mdat: https://github.com/lclevy/canon_cr3

Currently only THMB and PRVW are read.

Merge pull request #574 from LebedevRI/ci

[CI] LLVM17 migrated to debian testing from unstable

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (2d82e3e) 58.78% compared to head (f50e72c) 58.78%.

@@ Coverage Diff @@
## develop #574 +/- ##
========================================
 Coverage 58.78% 58.78% 
========================================
 Files 245 245 
 Lines 14136 14136 
 Branches 1935 1935 
========================================
 Hits 8310 8310 
 Misses 5708 5708 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.35% <ø> (ø)
integration 47.06% <ø> (ø)
linux 56.76% <ø> (ø)
macOS 19.17% <ø> (ø)
rpu_u 47.06% <ø> (ø)
unittests 17.81% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Revert "theme : re-apply recent changes, fix merge commit fuckup"

This reverts commit f4121250fa058e2bcf7ba4abfabbb212186c9714.

Merge pull request #573 from LebedevRI/wip

A few more cleanups based on Sonar feedback

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (be7bc5c) 58.78% compared to head (6156a9e) 58.78%.

Files Patch % Lines
src/librawspeed/decompressors/JpegDecompressor.cpp 0.00% 4 Missing :warning:
src/utilities/rstest/MD5Benchmark.cpp 0.00% 2 Missing :warning:
@@ Coverage Diff @@
## develop #573 +/- ##
========================================
 Coverage 58.78% 58.78% 
========================================
 Files 245 245 
 Lines 14137 14136 -1 
 Branches 1935 1935 
========================================
 Hits 8310 8310 
+ Misses 5709 5708 -1 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.35% <0.00%> (+<0.01%) :arrow_up:
integration 47.06% <0.00%> (+<0.01%) :arrow_up:
linux 56.76% <0.00%> (+<0.01%) :arrow_up:
macOS 19.17% <0.00%> (+<0.01%) :arrow_up:
rpu_u 47.06% <0.00%> (+<0.01%) :arrow_up:
unittests 17.81% <0.00%> (+<0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

pipeline: functions are not mutually-exclusive, which makes no sense.

fix #234 for now.

Will need a real fix later because we lose at least 0.1 s for nothing on each param change.

Merge pull request #572 from LebedevRI/wip

Some misc cleanups

JpegDecompressor: always compile jpeg_mem_src-internal

Code that is never compiled is dead code that will break :/

UncompressedDecompressor::decode12BitRawUnpackedLeftAligned(): avoid C-style casts

colorin: use a message box to warn users about unfound input color profile

This is a serious-enough issue to disturb user's workflow and force them to update the ICC file path if possible. A subtle hint in the log bubble goes unnoticed most of the time and doesn't help.

Codecov Report

Attention: 33 lines in your changes are missing coverage. Please review.

Comparison is base (7e13cd7) 58.69% compared to head (e8acb9c) 58.78%.

Files Patch % Lines
src/librawspeed/decompressors/JpegDecompressor.cpp 0.00% 18 Missing :warning:
...rc/librawspeed/decompressors/LJpegDecompressor.cpp 66.66% 7 Missing :warning:
src/librawspeed/common/BayerPhase.h 0.00% 2 Missing :warning:
...awspeed/decompressors/UncompressedDecompressor.cpp 60.00% 2 Missing :warning:
...awspeed/decompressors/UncompressedDecompressor.cpp 0.00% 1 Missing :warning:
src/librawspeed/codes/HuffmanCode.h 0.00% 1 Missing :warning:
src/librawspeed/decoders/DngDecoder.cpp 50.00% 1 Missing :warning:
src/utilities/rstest/MD5Benchmark.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #572 +/- ##
===========================================
+ Coverage 58.69% 58.78% +0.08% 
===========================================
 Files 244 245 +1 
 Lines 14042 14137 +95 
 Branches 1935 1935 
===========================================
+ Hits 8242 8310 +68 
- Misses 5682 5709 +27 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.35% <8.33%> (-0.06%) :arrow_down:
integration 47.06% <38.98%> (-0.30%) :arrow_down:
linux 56.76% <44.26%> (-0.03%) :arrow_down:
macOS 19.16% <13.33%> (+0.37%) :arrow_up:
rpu_u 47.06% <38.98%> (-0.30%) :arrow_down:
unittests 17.81% <3.33%> (+0.32%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Partially revert commit 501c09a95ef7cda6cdbb02c29af5c322caef4a9d.

Sonar complains on final methods in final classes.

lolwut?

By mistake, wanted the reverse merge

Description of the bug

When using a second instance of a processing module, the changes made in that instance don't change the picture.

To Reproduce

  1. Open an image in darkroom
  2. Make a drastic change in Color Balance RGB
  3. See the image change
  4. Make a second instance of Color Balance RGB
  5. Make another drastic change
  6. Nothing happens

System

  • darktable version : 1d3f83d
  • OS : Linux
  • Linux - Distro : Pop OS
  • Memory : 16gb
  • Graphics card : -
  • Graphics driver : -
  • OpenCL installed : -
  • OpenCL activated : -

Additional context

  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
  • Global menu > Run > Clear all Pipeline Cache doesn't help

I just found out that the changes in step 5 of the list above will get applied in Lighttable once I exit the Darkroom. On re-entering the Darkroom the second instance behaves normally - changes are visible directly in the preview. However making a new, third instance shows the same problem.

Load the attached image. Using the code below, an illegal value of zero is returned from orientation.


Exiv2::ExifData::const_iterator md = Exiv2::orientation(exifData);
if (md != exifData.end() && md->typeId() != Exiv2::undefined)
{
#ifdef _DEBUG
    if (md->toInt64() == 0)
    {
        auto o = md->toString();
        __debugbreak(); // this is executed. String is "0"
    }
#endif
    _imageTransform = exif2Qt(md->toInt64());
}

IMG_0675

It is actually 0 in the file, so I don't think this is an exiv2 bug.

Exiftool also reports Orientation : Unknown (0).

Merge pull request #571 from LebedevRI/wip

Some misc cleanups

RawImageData::getByteDataAsUncroppedArray2DRef(): avoid having default for fully-covered switch

IiqDecoder::CorrectQuadrantMultipliersCombined(): avoid having default for fully-covered switch

Got a weird one here. When I try to python -c 'import torchaudio' in a particular venv, I get:

terminate called after throwing an instance of 'std::regex_error' 
 what(): Invalid character class.

Thread 1 "python" received signal SIGABRT, Aborted.

GDB seems to point the finger at LensFun. Here's the shortened backtrace, leaving out all the PyEval/PyObject stuff and showing the error originating in liblensfun.so.2 when TorchAudio tries to load libtorchaudio_ffmpeg6.so.

gef➤ bt
#0 __pthread_kill_implementation (threadid=, signo=signo@entry=0x6, no_tid=no_tid@entry=0x0) at pthread_kill.c:44
#1 0x00007ffff78ac8a3 in __pthread_kill_internal (signo=0x6, threadid=) at pthread_kill.c:78
#2 0x00007ffff785c668 in __GI_raise (sig=sig@entry=0x6) at ../sysdeps/posix/raise.c:26
#3 0x00007ffff78444b8 in __GI_abort () at abort.c:79
#4 0x00007fff90a9ca6f in __gnu_cxx::__verbose_terminate_handler() () at /usr/src/debug/gcc/gcc/libstdc++-v3/libsupc++/vterminate.cc:95
#5 0x00007fff90ab011c in __cxxabiv1::__terminate(void (*)()) (handler=) at /usr/src/debug/gcc/gcc/libstdc++-v3/libsupc++/eh_terminate.cc:48
#6 0x00007fff90ab0189 in std::terminate() () at /usr/src/debug/gcc/gcc/libstdc++-v3/libsupc++/eh_terminate.cc:58
#7 0x00007fff90ab03ed in __cxxabiv1::__cxa_throw(void*, std::type_info*, void (*)(void*)) (obj=, tinfo=0x7fff90c6d3e8 , dest=0x7fff90ae0ca0 )
 at /usr/src/debug/gcc/gcc/libstdc++-v3/libsupc++/eh_throw.cc:98
#8 0x00007ffea5437801 in std::__throw_regex_error(std::regex_constants::error_type, char const*) (__ecode=std::regex_constants::_S_error_collate, __what=0x7ffea541c472 "Invalid character class.")
 at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_error.h:173
#9 0x00007ffea5442304 in std::__detail::_BracketMatcher, false, false>::_M_add_character_class(std::__cxx11::basic_string, std::allocator > const&, bool)
 (this=0x7fffffff3bf0, __s=, __neg=) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.h:489
#10 std::__detail::_Compiler >::_M_expression_term(std::__detail::_Compiler >::_BracketState&, std::__detail::_BracketMatcher, false, false>&)
 (this=0x7fffffff4ba0, __matcher=..., __last_char=) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:535
#11 std::__detail::_Compiler >::_M_insert_bracket_matcher(bool) (this=0x7fffffff4ba0, __neg=) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:419
#12 std::__detail::_Compiler >::_M_bracket_expression() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:350
#13 0x00007ffea543ae9d in std::__detail::_Compiler >::_M_atom() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:336
#14 std::__detail::_Compiler >::_M_term() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:133
#15 std::__detail::_Compiler >::_M_alternative() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:115
#16 0x00007ffea54399a9 in std::__detail::_Compiler >::_M_alternative() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:118
#17 0x00007ffea54399a9 in std::__detail::_Compiler >::_M_alternative() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:118
#18 0x00007ffea54399a9 in std::__detail::_Compiler >::_M_alternative() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:118
#19 0x00007ffea54399a9 in std::__detail::_Compiler >::_M_alternative() (this=0x7fffffff4ba0) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:118
#20 0x00007ffea5436b59 in std::__detail::_Compiler >::_M_disjunction() (this=0xe3632) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:91
#21 0x00007ffea543617c in std::__detail::_Compiler >::_Compiler(char const*, char const*, std::locale const&, std::regex_constants::syntax_option_type)
 (this=0x7fffffff4ba0, __b=, __e=, __loc=..., __flags=std::regex_constants::_S_ECMAScript) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex_compiler.tcc:76
#22 std::__cxx11::basic_regex >::_M_compile(char const*, char const*, std::regex_constants::syntax_option_type)
 (this=0x7ffea5463260 , __first=, __last=, __f=std::regex_constants::_S_ECMAScript) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex.h:809
#23 0x00007ffea5430708 in std::__cxx11::basic_regex >::basic_regex(char const*, std::regex_constants::syntax_option_type)
 (this=0x7ffea5463260 , __p=0xe3632 , __f=std::regex_constants::_S_ECMAScript) at /usr/lib64/gcc/x86_64-pc-linux-gnu/13.2.1/../../../../include/c++/13.2.1/bits/regex.h:473
#24 __cxx_global_var_init () at /usr/src/debug/lensfun-git/lensfun/libs/lensfun/lens.cpp:159
#25 0x00007ffea5430708 in _GLOBAL__sub_I_lens.cpp () at /usr/lib/liblensfun.so.2
#26 0x00007ffff7fceeee in call_init (env=0x5555557171f0, argv=0x7fffffffce48, argc=0x3, l=) at dl-init.c:90
#27 call_init (l=, argc=0x3, argv=0x7fffffffce48, env=0x5555557171f0) at dl-init.c:27
#28 0x00007ffff7fcefdc in _dl_init (main_map=0x55555c1ab350, argc=0x3, argv=0x7fffffffce48, env=0x5555557171f0) at dl-init.c:137
#29 0x00007ffff7fcb56e in __GI__dl_catch_exception (exception=exception@entry=0x0, operate=operate@entry=0x7ffff7fd58d0 , args=args@entry=0x7fffffff4f40) at dl-catch.c:211
#30 0x00007ffff7fd5876 in dl_open_worker (a=a@entry=0x7fffffff50e0) at dl-open.c:810
#31 0x00007ffff7fcb4e1 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffff50c0, operate=operate@entry=0x7ffff7fd57e0 , args=args@entry=0x7fffffff50e0) at dl-catch.c:237
#32 0x00007ffff7fd5bec in _dl_open
 (file=0x7ffedeaae850 "/home/jeff/envs/python/virtualenvs/text-generation-webui/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio_ffmpeg6.so", mode=, caller_dlopen=0x7ffff4ca8b0d , nsid=, argc=0x3, argv=0x7fffffffce48, env=0x5555557171f0) at dl-open.c:886
#33 0x00007ffff78a69ec in dlopen_doit (a=a@entry=0x7fffffff5350) at dlopen.c:56
#34 0x00007ffff7fcb4e1 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffff52b0, operate=0x7ffff78a6990 , args=0x7fffffff5350) at dl-catch.c:237
#35 0x00007ffff7fcb603 in _dl_catch_error (objname=0x7fffffff5308, errstring=0x7fffffff5310, mallocedp=0x7fffffff5307, operate=, args=) at dl-catch.c:256
#36 0x00007ffff78a64f7 in _dlerror_run (operate=operate@entry=0x7ffff78a6990 , args=args@entry=0x7fffffff5350) at dlerror.c:138
#37 0x00007ffff78a6aa1 in dlopen_implementation (dl_caller=, mode=, file=) at dlopen.c:71
#38 ___dlopen (file=, mode=) at dlopen.c:81
#39 0x00007ffff4ca8b0d in py_dl_open (self=, args=) at /usr/src/debug/python310/Python-3.10.13/Modules/_ctypes/callproc.c:1533
#40 0x00007ffff7b4bdf8 in cfunction_call (func=0x7fffbab218a0, args=, kwargs=) at Objects/methodobject.c:552

And indeed this issue was solved by rebuilding ffmpeg with --disable-liblensfun.

This would imply there's an issue in the LensFun regex, but that code hasn't been changed since 2018 according to the git blame. This is strange indeed, so let me know if I'm barking up the wrong tree here.

Can't recreate on current stable Debian and Ubuntu LTS. No venvs used. Perhaps a particularity of your venv?

I could reproduce an error on Arch with ffmpeg-amd-full (in a fresh venv with only torchaudio installed). However I got:

terminate called after throwing an instance of 'std::bad_alloc'
 what(): std::bad_alloc

This led me to this: https://stackoverflow.com/questions/51382355/stdregex-and-dual-abi And indeed adding _GLIBCXX_USE_CXX11_ABI=0 fixed the problem for me.

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 75f0ae6a..9bc16e63 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -18,6 +18,8 @@ ELSE()
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -Wformat-security")
 ENDIF()

+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+
 # version
 SET(VERSION_MAJOR 0)
 SET(VERSION_MINOR 3)

@sjuxax Could you test this patch?

I'm not sure whether this is lensfun's fault, or whether adding _GLIBCXX_USE_CXX11_ABI=0 could break other software compiled with _GLIBCXX_USE_CXX11_ABI=1.

Confirmed that building LensFun with the patch, and rebuilding ffmpeg with --enable-liblensfun, allows for a successful import of torchaudio.

For reference: Another user with the same issue

Codecov Report

Attention: 66 lines in your changes are missing coverage. Please review.

Comparison is base (29ebe58) 58.80% compared to head (2c237e7) 58.69%.

Files Patch % Lines
src/librawspeed/common/DngOpcodes.cpp 0.00% 23 Missing :warning:
src/librawspeed/common/RawImage.h 0.00% 5 Missing :warning:
src/utilities/rstest/rstest.cpp 0.00% 4 Missing :warning:
src/librawspeed/codes/BinaryPrefixTree.h 0.00% 2 Missing :warning:
src/librawspeed/tiff/TiffEntry.cpp 0.00% 2 Missing :warning:
src/librawspeed/tiff/TiffIFD.cpp 0.00% 2 Missing :warning:
src/utilities/identify/rawspeed-identify.cpp 0.00% 2 Missing :warning:
...zz/librawspeed/decompressors/DummyLJpegDecoder.cpp 0.00% 1 Missing :warning:
src/librawspeed/common/RawImage.cpp 0.00% 1 Missing :warning:
src/librawspeed/common/RawspeedException.cpp 0.00% 1 Missing :warning:
... and 23 more
@@ Coverage Diff @@
## develop #571 +/- ##
===========================================
- Coverage 58.80% 58.69% -0.11% 
===========================================
 Files 235 244 +9 
 Lines 14020 14044 +24 
 Branches 1935 1935 
===========================================
- Hits 8244 8243 -1 
- Misses 5658 5683 +25 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.39% <0.00%> (-0.02%) :arrow_down:
integration 47.35% <22.22%> (-0.10%) :arrow_down:
linux 56.78% <26.66%> (-0.12%) :arrow_down:
macOS 18.77% <6.06%> (-0.04%) :arrow_down:
rpu_u 47.35% <22.22%> (-0.10%) :arrow_down:
unittests 17.48% <4.44%> (-0.04%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge remote-tracking branch 'upstream/pr/537' into develop

  • upstream/pr/537: Sony ILCE-7CR support

Merge remote-tracking branch 'upstream/pr/565' into develop

  • upstream/pr/565: Canon IXY 220F support

Set conditional HTTP depending on EXIV2_ENABLE_WEBREADY

Develop: protect all calls to dt_dev_invalidate_all() by history mutex locking

Otherwise, what happens is:

  1. The user changes a slider in module n
  2. A pipeline recompute is triggered
  3. dt_dev_pixelpipe_change() does a dt_dev_pixelpipe_synch_top() from the last module in history
  4. That last module in history could be n + 1 or n - 1 depending on which thread wrote history last. If n + 1, the pipeline starts recomputing at n + 1 using the garbled (empty) output of module n.

Develop: do not handle shutdown atomic setting in GUI thread

The dt_dev_invalidate(.*) functions are called from the GUI thread. Locking the pipe mutex there makes the GUI hang. Do it in the pipeline thread instead.

develop : killswitch mechanism v2

  • dt_dev_invalidate_all() calls outside history mutex locks to avoid deadlocks
  • handle killswitch in dt_dev_invalidate_all()
  • make it wait for pipes to release their busy locks

pixelpipe: checks are mutually exclusive

Replace sequenced if statements with else if, since the cases are mutually exclusive.

Description of the bug

Ansel 3c8f896 reporting an exception

To Reproduce

  1. Install Ansel 3c8f896 or any older build
  2. Run Ansel
  3. After a few seconds, an exception message will pop up
  4. Click on the 'OK' button in the message
  5. The program terminates and writes a log in the following location: C:\Users[...]\AppData\Local\Temp (7 of them are attached)

Expected behavior

  1. Install Ansel 3c8f896 or any older build
  2. Run Ansel
  3. The software opens and is ready to be used

Context

n/a

Screenshots

Screenshot 2023-11-28 125334

Which commit introduced the error

Tried 3c8f896 and a few older versions, including the first Win64 version. None would start. All will show the same exception message.

System

  • darktable version : n/a
  • Ansel version: 3c8f896
  • OS : Windows 10.0.19041.3636
  • Memory :
  • Graphics card : NVIDIA GeForce RTX 4060 Ti
  • Graphics driver : Game Ready Driver version 546.17
  • OpenCL installed : yes
  • OpenCL activated : yes
  • Xorg : n/a
  • Desktop : n/a
  • GTK+ : ?
  • gcc : ?
  • cflags : ?
  • CMAKE_BUILD_TYPE : ?

Additional context

  • Can you reproduce with another darktable [<-- needs to be replaced by ansel] version(s)? yes with all previous versions
  • Can you reproduce with a RAW or Jpeg or both? n/a
  • Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes or n/a
  • Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes or n/a
  • Do you use lua scripts? no

Retrieved logs:

ansel_bt_2HACF2.txt ansel_bt_AFWBF2.txt ansel_bt_DCE6E2.txt ansel_bt_KT57E2.txt ansel_bt_NII0E2.txt ansel_bt_UGGXE2.txt ansel_bt_UUYCF2.txt

This is an issue with your OpenCL driver. Try removing, reinstalling, or downgrading it. Seeing that you use a very new driver (Game Ready), I suggest you revert to the LTS/production-ready version (slightly older).

I am now getting the same issue ansel_bt_JQX1E2.txt ansel_bt_X7CWE2.txt ansel_bt_ZZGIF2.txt ansel_bt_12BDF2.txt ansel_bt_JJ9YE2.txt

I tried your suggestion to use the integrated graphics for display and my RTX 3060 for OpenCL. I started getting the error. I tried the latest Darktable and got no errors. I then switched my HDMI cable back to the RTX and am still getting errors. First the error happened in the Crop module; now it happens when I try to use the Preferences.

@dnlyko your issue is different, it's with dtgtk_toggle_button. Please open a different issue.

@EricBright Please confirm whether the solution given by @aurelienpierre works

OpenCL does not have an individual installer on my PC. It seems to be part of another package. Since I have several other pieces of software that rely on the performance of my graphics card and OpenCL, I won't be able to downgrade any of the drivers (previous graphics card drivers were causes of frequent BSODs on my box, which I cannot afford to re-experience).

There was, however, this package that I temporarily uninstalled with no positive outcome: OpenCL™ and OpenGL® Compatibility Pack.

At any rate, thank you for looking into the error! 🙏🏽 If it cannot be fixed, then that is that.

For future reference:

ansel.exe caused an Access Violation at location 00007FF9E686385F in module CLOn12Compiler.dll Writing to location 0000000000000008.

AddrPC Params
00007FF9E686385F 00000208E21BD3A0 000000C6471FC300 00000208AF42EED8 CLOn12Compiler.dll!clc_libclc_new_dxil+0x7bf
00007FF9E68649A1 0000000000000001 00007FFA9E4F0000 0000000000000000 CLOn12Compiler.dll!clc_spirv_to_dxil+0xb11
00007FFA3F049A2D 0000000000000000 00000208E10CB7D8 00000208AE3A8770 OpenCLOn12.dll!0x9a2d
00007FFA3F06BAEA 0000000000000000 00000208F5C65160 00000208AD2DD0F0 OpenCLOn12.dll!clUnloadPlatformCompiler+0x5f8a
00007FFA3F06A798 0000020800000000 0000000000000000 00000208E11D9398 OpenCLOn12.dll!clUnloadPlatformCompiler+0x4c38
00007FFA3F06A24F 00000208E24F9AD0 000000C6471FEA18 00000208E169B800 OpenCLOn12.dll!clUnloadPlatformCompiler+0x46ef
00007FFA3F06E20C 00000208E11D9340 0000000000002500 0000000000000000 OpenCLOn12.dll!clBuildProgram+0x17c
00007FFA122C7751 00000208E2103C90 0000000000000000 00000208E1584350 libansel.dll!dt_opencl_build_program+0x91
00007FFA122CDCDF 00000208E1721920 0000000000000000 0000000000000000 libansel.dll!dt_opencl_init+0x253f
00007FFA12229E79 00007FFA00000001 00000208DEBD66F0 0000000000000001 libansel.dll!dt_init+0xd09
00007FF73D8A2CEB 00007FF73D8A1560 00007FF73D8A277D 00007FFA9C0A1270 ansel.exe!0x2ceb
00007FF73D8A14C2 0000000000000000 00007FF73D8A7048 0000000000000000 ansel.exe!0x14c2
00007FF73D8A12F7 0000000000000000 0000000000000000 0000000000000000 ansel.exe!0x12f7
00007FF73D8A1406 0000000000000000 0000000000000000 0000000000000000 ansel.exe!0x1406
00007FFA9CB57344 0000000000000000 0000000000000000 0000000000000000 KERNEL32.DLL!BaseThreadInitThunk+0x14
00007FFA9E5426B1 0000000000000000 0000000000000000 0000000000000000 ntdll.dll!RtlUserThreadStart+0x21

You will need to start the software with OpenCL disabled if you can't downgrade the GPU driver. But as I said, what you have is a really, really new version of the driver (November 14th, 2023), so I highly doubt that any app of yours requiring GPU drivers would fail to work with slightly older and more stable drivers. You can try https://www.nvidia.fr/download/driverResults.aspx/217056/fr

dtgtk gradient slider: protect against global panel scrolling

Scrolling is captured only if the widget has the focus, after a click. Fix #232

theme : re-apply recent changes, fix merge commit fuckup

Don't let VS Code handle branch sync for you…

temperature.c: dispatch value changed and history add events properly

It's unclear which side-effect of the bauhaus comboboxes this relied on to happen implicitly before, but it was wonky design.

Bauhaus.c : be less aggressive on preventing slider setting on unchanged value

d->pos is set on motion_notify, with or without committing (depending on whether we are in a popup). It needs to be committed on button_release anyway for safety.

Description of the bug

When scrolling over a module with a parametric masks, the scrolling gets stopped by the input slider for the mask and scrolling starts changing the slider values.

To Reproduce

  1. Set "Blending" to "Parametric Mask"
  2. Try to scroll down past the "Blending dialogue" to the next module

Expected behavior

The slider should only react to scrolling once it was activated by clicking (its title) - like the other sliders.

Merge pull request #570 from yattaro/develop

Update cameras.xml with values for Canon EOS 550D

The white point value for the Canon EOS 550D/T2i/Kiss X4 when the image is below 200 ISO is missing, resulting in blown highlights. A sample raw image shot on my 550D where this issue can be seen is here.

This PR sets the white point for this camera to 13583 when below 200 ISO.

As mentioned in https://github.com/darktable-org/darktable/issues/15636#issuecomment-1807193990, we really should read these from EXIF.

Could you please take a shot on every ISO level, convert them to DNG and process them with dngmeta.rb to obtain the full set of values?


Got it, here's the resulting output of dngmeta.rb:

invsensors {[2048, 13926]=>[100], [2054, 15000]=>[200, 400, 800, 1600, 3200, 6400, 12800]}

sensors [[100, [2048, 13926]], [200, [2054, 15000]], [400, [2048, 15000]], [800, [2048, 15000]], [1600, [2048, 15000]], [3200, [2048, 15000]], [6400, [2054, 15000]], [12800, [2054, 15000]]]
mostfrequent [[2054, 15000], [200, 400, 800, 1600, 3200, 6400, 12800]]
invsensors {[2048, 13926]=>[100]}

Canon EOS 550D

I've updated this PR accordingly.

@yattaro thank you!

Blending: Remove the bottom border of blending options button

Previously the blending options button had a bottom border to make it look like a tab button, because it was aligned with the other masking tab buttons. We no longer use masking tab buttons, and it is actually a menu button, so remove its bottom border.

Blending: Add padding for masks modes box

It is not a tab bar now, and adding padding makes the combo box easier to read.

Blending: Rename blending-tabs to blending-header for widget name

It's not a tab bar anymore; we are using a combo box.

Pipeline: implement a kill-switch mechanism when history has changed

Abort all pending pipeline computations if history has changed. This spares some seconds of useless computation and lets the pipeline restart immediately with the new params.

window manager: get window position with gdk_window_get_origin instead of gtk_window_get_position.

Seems to be more reliable across desktop environments?

Bauhaus: refine the timeout logic

Do not apply timeouts:

  • on clicks in combobox popups,
  • on drag-and-drop end (button release) on sliders,
  • on drag-and-drop start (button pressed) on slider precision popup

This makes the GUI even more responsive with non-interactive setting methods.

Bump mymindstorm/setup-emsdk from 12 to 13

Bumps mymindstorm/setup-emsdk from 12 to 13. - Release notes - Commits


updated-dependencies:
- dependency-name: mymindstorm/setup-emsdk
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot]

Bauhaus.c : Gtk 3.24/Wayland support : fix the order of macros

GTK_CHECK_VERSION(3, 24, 0) returns TRUE if version >= 3.24.0, if/else statements were inverted.

Bumps mymindstorm/setup-emsdk from 12 to 13.

Sourced from mymindstorm/setup-emsdk's releases .

Version 13

Updated to Node 20

d233ac1 v13

1749b22 npm audit fix + update runtime to node20

See full diff in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (55fffbb) 63.88% compared to head (5fe945c) 63.88%.

@@ Coverage Diff @@
## main #2851 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22369 22369 
 Branches 10865 10865 
=======================================
 Hits 14291 14291 
 Misses 5856 5856 
 Partials 2222 2222 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Bauhaus.c: prepare popup positioning for Wayland

Can't make it work properly though.

Bauhaus : fix a bug in coordinates handling for popups

Gtk is weird: it sends motion_notify events to widgets that are not hovered. Since popups are laid over, they behave differently and we can't use their relative coordinates, because there is a chance they belong to another widget. For popups, we grab absolute coordinates in main-window space instead.

Bauhaus: lose focus on popups on click outside their frame

Instead of mousing out of the frame.

Revert "Bauhaus: popup positionning: take top DE bar into account if any"

This reverts commit 525966274558fa74ed13be845bac48b54e1a081d.

This is an automatic backport of pull request #2847 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Attention: 7 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (6ca5824) 63.98%. Report is 7 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## 0.28.x #2850 +/- ##
==========================================
- Coverage 63.99% 63.98% -0.02% 
==========================================
 Files 103 103 
 Lines 22338 22340 +2 
 Branches 10821 10822 +1 
==========================================
- Hits 14296 14294 -2 
- Misses 5818 5822 +4 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

cppcheck: use try_emplace

Found with stlFindInsert

Signed-off-by: Rosen Penev

cppcheck: use auto when copies are cheap

Found with constVariableReference

Signed-off-by: Rosen Penev

cppcheck: small bool simplification

Found with duplicateConditionalAssign

Signed-off-by: Rosen Penev

cppcheck: remove initialization

Found with redundantInitialization

Signed-off-by: Rosen Penev

cppcheck: add const to pointers

Found with constVariablePointer

Signed-off-by: Rosen Penev

remove boilerplate

Signed-off-by: Rosen Penev

remove some regex to avoid slowdown

Signed-off-by: Rosen Penev

Hi there,

I do not see the information about aperture, focal length, etc. of my .cr3 files anymore.

Expected behavior

See relevant data. In some older version of Ansel this was possible (as far as I remember); useful for corrections, Denoise, Lens Correction, ...

Clipboard02 Clipboard01

System

Win10, i7-2600, Geforce GTX 1050ti Ansel 6f72fc9

All the Best audiomartin

Can you send an affected CR3 file?

Sure: https://we.tl/t-yrqriblmUl

The file is too big to upload on GitHub.

m.

I tried your file and I get an Exiv2 error. Are you sure your camera is supported by Exiv2?

Ok, I needed to rebuild Exiv with ISOBMFF support and it finally worked. Seems supported on Linux…

Thanks for taking care!

Still the same with latest version bf804ef of Ansel on Win10.

https://github.com/aurelienpierreeng/ansel/issues/231#issuecomment-1832746980

It is an Eos R5, worked before up to now. With Pictures imported before version 6f72fc9 the Exifdata is still visible.

m.

Your last working version was built using mingw-w64-ucrt-x86_64-exiv2-0.28.1-1 and the new ones are built using mingw-w64-ucrt-x86_64-exiv2-0.28.1-2, so I guess something is wrong in the Win Exiv2 package

There might be some hints for a developer here; for me it is far beyond my knowledge:

https://github.com/darktable-org/darktable/issues/15478#issue-1959660414

and

https://github.com/Exiv2/exiv2/issues/2637#issue-1729310589

You probably already found it yourself.

m.

Also @AlynxZhou found this commit in the MinGW build script that drops explicit ISOBMFF support in Exiv2, relying on whatever defaults the project's CMakeLists.txt uses.

@audiomartin Did you put your CR3 files in a path that contains non-ASCII chars?

YES! For testing I made a folder without an Umlaut, and it works! All EXIF data is visible. Great guys. For the moment I will rename my folders to avoid Umlauts. Thanks for the hints. m.

Bitte schön :D

I'd like to have a look at how to fix it tomorrow (it's an Exiv2 issue), but I hope you can accept that as a workaround for now.

Danke schön.

@AlynxZhou No problem, just have to remember dropping my äöü. :-)

OK, I managed to build exiv2 0.27 in our CI so the Windows package will grab it.

And I tested it on my Windows with my CR3 file (because I cannot download yours from the link), it shows EXIF data correctly:

[image]

I suppose https://github.com/Exiv2/exiv2/pull/2800 could actually fix this problem for exiv2 0.28, but before that gets merged, sticking to exiv2 0.27 in our Windows package is acceptable to me. I'll send a PR.

Make the style of new blending bar consistent, please check the screenshots and commit messages for reasons and details.

Before: Screenshot from 2023-11-27 13-44-18

After: Screenshot from 2023-11-27 15-21-09

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 0 Code Smells (A)

No coverage information. 2.1% Duplication.

Blending & Masking : revert cute & cryptic icon buttons to explicit combobox selection.

Description of the bug

When changing the aspect ratio in the crop module, the pop-up with all the preset ratios exceeds the screen. Without scrollbars it is impossible to reach the bottom entries.

Screenshots Crop-module-combobox

System

  • Ansel version : 633d9b7
  • OS : Linux
  • Linux - Distro : Pop OS
  • Memory :
  • Graphics card : -
  • Xorg : x11
  • Desktop : GNOME


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (f551534) 63.88% compared to head (20958a5) 63.88%.

@@ Coverage Diff @@
## main #2849 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22381 22381 
 Branches 10873 10873 
=======================================
 Hits 14297 14297 
 Misses 5862 5862 
 Partials 2222 2222 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.


Codecov Report

Attention: 14 lines in your changes are missing coverage. Please review.

Comparison is base (f551534) 63.88% compared to head (e41e13d) 63.89%.

Files Patch % Lines
src/http.cpp 0.00% 4 Missing :warning:
src/tiffvisitor_int.cpp 50.00% 1 Missing and 3 partials :warning:
src/sonymn_int.cpp 85.71% 0 Missing and 2 partials :warning:
src/basicio.cpp 0.00% 0 Missing and 1 partial :warning:
src/crwimage.cpp 0.00% 0 Missing and 1 partial :warning:
src/image.cpp 0.00% 1 Missing :warning:
src/xmp.cpp 0.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2848 +/- ##
==========================================
+ Coverage 63.88% 63.89% +0.01% 
==========================================
 Files 103 103 
 Lines 22381 22371 -10 
 Branches 10873 10866 -7 
==========================================
- Hits 14297 14293 -4 
+ Misses 5862 5856 -6 
 Partials 2222 2222 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Export : set format combobox properly at init time

It seemed to rely on value-changed signal side-effects to be set previously. Also protect combobox setting from segfaults.

Bauhaus : change "on-click" behaviour when in precision popup

Commit motion_notify events to pipeline if dragging the slider in the precision popup (after a click)

Fixes: https://github.com/Exiv2/exiv2/issues/2831


Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (f551534) 63.88% compared to head (586efa9) 63.87%. Report is 1 commit behind head on main.

Files Patch % Lines
src/datasets.cpp 66.66% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2847 +/- ##
==========================================
- Coverage 63.88% 63.87% -0.01% 
==========================================
 Files 103 103 
 Lines 22381 22379 -2 
 Branches 10873 10872 -1 
==========================================
- Hits 14297 14295 -2 
 Misses 5862 5862 
 Partials 2222 2222 


Feels like we should backport this one?

Might as well.

@Mergifyio backport 0.28.x

backport 0.28.x

✅ Backports have been created

Bauhaus: Fix wrongly floored value of percentage sliders

When the format of a slider is set to percentage, its digits are decreased by 2 and its factor is set to 100, so when we set the normalized value of the slider, we need to handle both digits and factor.

In the large refactor of commit 48a86a9c0353f87444fd6956f087af933ecad599, we forgot to multiply by the factor while setting the normalized value, so percentage sliders were always floored to an integer value and lost their fractional part.

This commit fixes it by multiplying by the factor when setting the normalized value.

Description of the bug

When importing a folder, the JPG metadata is not matched to the corresponding raw file if there is an umlaut (e.g., ö) in the folder name.

To Reproduce

  1. Go to Import
  2. Select a folder with an umlaut (e.g., ö)
  3. Import
  4. See metadata is empty (no exposure, camera brand, etc.)

Expected behavior

Just like importing from any other folder (i.e., raws having metadata)

Context

import.zip

Which commit introduced the error

Using prepackaged Windows Builds, the first broken version is this: ansel-eb70788-win64.exe

Meaning, it is working in version: ansel-43e4fce-win64.exe

System

Windows system

Additional context: reproducible with all versions newer than the first broken version, ansel-eb70788-win64.exe.

Very probably related to #231

See https://github.com/aurelienpierreeng/ansel/issues/231 for a workaround; this issue is with Exiv2.

Merge pull request #568 from LebedevRI/coverage-windows

[CI] Consolidate codecov uploads into a separate post-job

[CI] Cache coverage reports

This partially reverts commit 1a35b6213c5aba276d51b1134f4460d00b857baa.


Kudos, SonarCloud Quality Gate passed!

0 Bugs · 0 Vulnerabilities · 0 Security Hotspots · 0 Code Smells

No coverage information · 2.2% duplication

Awesome, thanks !

Description of the bug

On the main website there is a link with a button called "Supported Cameras" that leads to the malicious website https://rawspeed.org/CameraSupport.html, showing ads and an antivirus scam.

That was the former Rawspeed domain, which wasn't renewed and is now squatted. The camera support page is generated by a Python static-website generator from the source code. I need to set up a GitHub Page with that and an action.

The CMake-Windows workflow currently fails with ModuleNotFoundError: No module named 'setuptools'. Fixed by installing the Python package setuptools, implemented as described here.

Indeed.

Merge remote-tracking branch 'upstream/pr/566' into develop

  • upstream/pr/566:
      NefDecoder: simplify linearization curve search
      NefDecoder: refactor linearization curve detection

This is an automatic backport of pull request #2830 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (72129e7) 63.99% compared to head (d0cef52) 63.98%. Report is 6 commits behind head on 0.28.x.

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
@@ Coverage Diff @@
## 0.28.x #2845 +/- ##
==========================================
- Coverage 63.99% 63.98% -0.02% 
==========================================
 Files 103 103 
 Lines 22338 22342 +4 
 Branches 10821 10823 +2 
==========================================
 Hits 14296 14296 
- Misses 5818 5822 +4 
 Partials 2224 2224 


Wonder if CI changes should be backported as well.

Wonder if CI changes should be backported as well.

Probably...

View manager gui init : set the global variable gui->reset

Prevents bauhaus widgets from sending value-changed signals at init time, because some callbacks imply user interactions and don't make sense otherwise.
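The guard pattern described above can be sketched as follows (gui_reset stands in for the real darktable.gui->reset global; the other names are illustrative, not Ansel's actual API):

```c
/* Illustrative sketch of the init-time guard pattern. */
static int gui_reset = 0;            /* non-zero while the GUI initializes */
static int pipeline_recomputes = 0;  /* counts expensive recomputations */

static void on_value_changed(float value)
{
  (void)value;
  if(gui_reset) return;      /* programmatic set during init: ignore */
  pipeline_recomputes++;     /* genuine user interaction: recompute */
}

static void slider_set(float value)
{
  on_value_changed(value);   /* the widget emits value-changed on set */
}

static void view_manager_init(void)
{
  gui_reset = 1;             /* entering GUI init */
  slider_set(0.5f);          /* restores config without side effects */
  gui_reset = 0;             /* init done: interactions are live again */
}
```

Callbacks bail out while the flag is set, so restoring widget state from config cannot trigger user-interaction logic.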

Coding style : improve bauhaus

Declare const and struct on variables

colorchart: fix a bug in dead code to please Sonar Cloud static analysis.

Won't change anything, but we live to make bots happy, these days.

Bauhaus: major refactoring/partial rewrite

  1. simplify the control flow, avoid recomputing everything all the time, limit redrawing
  2. have cursor coordinates computed only in motion_notify callbacks
  3. dispatch value-changed events only on button_released events (instead of everywhere: button_pressed and motion_notify), so we don't start a pipeline recompute on every mouse interaction.
  4. be sure to redraw widgets on every interaction BEFORE sending the value-changed signal and dispatching pipeline params commits. That makes the GUI more responsive and less frustrating, giving immediate feedback about the value changing, while the pipeline recomputes may take some time.
  5. have a timeout on all scrolling events (comboboxes and sliders) so intermediate scroll steps don't lead to pipeline recomputes until scrolling is finished. This may lead to a small lag in GUI response when scrolling on comboboxes, but it prevents many useless intermediate pipeline recomputes, so overall it's better.
  6. dispatch the value-changed event ONLY if the value actually changed.
  7. Remove duplicated (copy-pasted) code computing coordinates and translations. All pixel computations are done in getter/setter functions in one place. Fix many inconsistencies in coordinate recomputations caused by isolated changes not being propagated to all the right places.
  8. Handle text drawing using uniform bounding-box adjustment.
  9. Remove useless features needlessly complicating the code, like the curve and center alignment.
  10. Do not shift the combobox popup in the viewport when scrolling. This was meant to keep the overlaid item in front of the (non-popup) label, but there is no guarantee that the shifted popup stays entirely visible in the viewport. We rely on native Gtk popup positioning and that's it.
  11. Dispatch hovered and active CSS rules on pseudo-elements in comboboxes, allowing them to be styled.
  12. Replace expensive calls to powf with an optimized integer ipow.
  13. Handle clicks on the combobox chevron "quad" too, as in any select field in any GUI toolkit.
  14. Remove the ability to add shortcuts on sliders and comboboxes, because that whole system needs to be removed entirely. It will be replaced with native Gtk accels, which did the job perfectly until 2021.
  15. Add shitloads of Doxygen docstrings for future dev doc.
  16. Fix a couple of silent bugs: doing boolean operations on floats, comparing floats with integers or doubles, etc.
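Point 5 (the scroll timeout) boils down to a debounce. A minimal sketch with illustrative names; in the real widget, on_scroll() would (re)arm a GLib timeout that eventually fires on_timeout() once, after scrolling has stopped:

```c
/* Illustrative debounce of scroll events: intermediate steps only
 * update the pending value; the expensive pipeline commit happens
 * once, when the timeout fires after scrolling stops. */
static float pending = 0.f;    /* value accumulated while scrolling */
static float committed = 0.f;  /* value the pipeline last saw */
static int commits = 0;        /* number of expensive recomputes */

static void on_scroll(float delta)
{
  pending += delta;            /* cheap: update the pending value only */
}

static void on_timeout(void)
{
  committed = pending;         /* expensive: one pipeline recompute */
  commits++;
}
```

Three scroll steps thus cost one recompute instead of three.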

Description of the bug

Two checkboxes are missing their labels when importing files from disk.

To Reproduce

  1. I'm running build f7669af with language manually changed to English
  2. Click on Open from disk... button in the Open & import section on the left

Expected behavior

All checkboxes have labels.

Context

Screenshots

ansel_PhUZi6wCIQ

Import popup got rewritten.

HTTP support is currently enabled regardless of whether EXIV2_ENABLE_WEBREADY is configured. This PR makes sure HTTP is not included/built when EXIV2_ENABLE_WEBREADY is explicitly disabled.

This is an alternative to https://github.com/Exiv2/exiv2/pull/2843 PoC: https://github.com/xbmc/xbmc/pull/24109


@kmilos this should address your comment here

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (2a55877) 63.88% compared to head (8c5f0ce) 63.88%.

@@ Coverage Diff @@
## main #2844 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22369 22369 
 Branches 10865 10865 
=======================================
 Hits 14291 14291 
 Misses 5856 5856 
 Partials 2222 2222 


from what I've seen elsewhere, exiv2 uses a mix of conditional compilation and various stubs.

Does it make sense to make public API conditional?

from what I've seen elsewhere, exiv2 uses a mix of conditional compilation and various stubs.

Does it make sense to make public API conditional?

IMHO it makes sense not to include the public headers if they can't be somehow used (e.g. as a result of a disabled option). This is what is done on this PR (similar to video disabled, PNG disabled, etc).

The fact that for some files you have to ifdef the implementation just shows there's room to improve the current architecture and split the implementation further. In an ideal world you'd only have to ifdef factory conditions and similar components. But that is a major undertaking and certainly something out of the scope of my proposed PRs.
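The gating discussed here has roughly this shape (EXIV2_ENABLE_WEBREADY is the real CMake option name; open_remote() is a placeholder, not Exiv2's API):

```c
/* Sketch of configure-time feature gating. With the option off, the
 * HTTP code path is compiled out entirely rather than stubbed. */
#ifdef EXIV2_ENABLE_WEBREADY
#define HAVE_HTTP 1
#else
#define HAVE_HTTP 0
#endif

static const char *open_remote(const char *url)
{
#if HAVE_HTTP
  return url;    /* placeholder for dispatching to the HTTP backend */
#else
  (void)url;
  return 0;      /* feature disabled at configure time: no HTTP */
#endif
}
```

In an ideal split, only such factory/dispatch points would need the #ifdef; the HTTP implementation itself would simply not be compiled.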

There seems to be no way to export photos while in the darkroom view.

Yes, and that is a safety measure: in darkroom you already have two image-processing pipelines grabbing memory (thumbnail and full preview), so exporting (possibly at full resolution) opens a third pipeline, and since the memory-handling code is broken, it will attempt to spawn a pipe even if there is not enough (v)RAM left, resulting in crashes and loss of your edit.

To export, go back to the lighttable: keep only one pipeline processing at a time and reduce the surface for crashes.

Description of the bug

Selecting the cinemascope aspect ratio in the Crop module causes an "invalid ratio format. it should be a positive number" message to appear.

To Reproduce

Select a photo, open the Crop module, and select the cinemascope aspect ratio.

Expected behavior

I get the selected aspect ratio

Context

Screenshots

Screenshot from 2023-11-23 14-43-43

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (cff96fd) 58.80% compared to head (fc463b3) 58.80%.

@@ Coverage Diff @@
## develop #568 +/- ##
========================================
 Coverage 58.80% 58.80% 
========================================
 Files 235 235 
 Lines 14020 14020 
 Branches 1935 1935 
========================================
 Hits 8244 8244 
 Misses 5658 5658 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.41% <ø> (ø)
integration 47.45% <ø> (ø)
linux 56.89% <ø> (ø)
macOS 18.81% <ø> (ø)
rpu_u 47.45% <ø> (ø)
unittests 17.51% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.


Merge pull request #567 from LebedevRI/benchmark

[CI] Enable building benchmarks on windows

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (a626bcd) 58.81% compared to head (228b013) 58.81%.

@@ Coverage Diff @@
## develop #567 +/- ##
========================================
 Coverage 58.81% 58.81% 
========================================
 Files 235 235 
 Lines 14019 14019 
 Branches 1934 1934 
========================================
 Hits 8245 8245 
 Misses 5656 5656 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.41% <ø> (+0.01%) :arrow_up:
integration 47.46% <ø> (ø)
linux 56.90% <ø> (ø)
macOS 18.81% <ø> (ø)
rpu_u 47.46% <ø> (ø)
unittests 17.51% <ø> (-0.02%) :arrow_down:
windows ∅ <ø> (∅)



NefDecoder: refactor linearization curve detection

Avoids throwing an exception when only tag 0x96 exists.


For an example see https://github.com/darktable-org/darktable/issues/15562#issuecomment-1824526754

This is again related to https://github.com/darktable-org/darktable/issues/5149

Codecov Report

Attention: 5 lines in your changes are missing coverage. Please review.

Comparison is base (a626bcd) 58.81% compared to head (6118507) 58.80%. Report is 7 commits behind head on develop.

Files Patch % Lines
src/librawspeed/decoders/NefDecoder.cpp 33.33% 4 Missing :warning:
src/librawspeed/tiff/TiffIFD.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## develop #566 +/- ##
===========================================
- Coverage 58.81% 58.80% -0.02% 
===========================================
 Files 235 235 
 Lines 14019 14020 +1 
 Branches 1934 1936 +2 
===========================================
- Hits 8245 8244 -1 
- Misses 5656 5658 +2 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.39% <0.00%> (-0.01%) :arrow_down:
integration 47.45% <33.33%> (-0.02%) :arrow_down:
linux 56.89% <33.33%> (-0.02%) :arrow_down:
macOS 18.81% <0.00%> (-0.01%) :arrow_down:
rpu_u 47.45% <33.33%> (-0.02%) :arrow_down:
unittests 17.52% <0.00%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)



@LebedevRI Can we please address this for the upcoming dt 4.6 please?

@LebedevRI Can we please address this for the upcoming dt 4.6 please?

Reminder that I've yet again no idea what the actual schedule is. I do not understand this secrecy at all. Why can it not be posted to Discourse?

Reminder that I've yet again no idea what the actual schedule is. I do not understand this secrecy at all. Why can it not be posted to Discourse?

It's public on the wiki, and the process hasn't changed in a while: https://github.com/darktable-org/darktable/wiki/Releases-cycle

Pascal also sends out a reminder on the mailing list, don't know about other channels...

@kmilos thank you. We'll see when this gets to dt, I kinda want to let things settle just a bit before updating stable.

Thanks. Sure, there are 3 more weeks by the looks of it.

@LebedevRI FYI, another dt schedule and code freeze confirmation.

@LebedevRI FYI, another dt schedule and code freeze confirmation.

Assuming things didn't get tagged weeks in advance, propagated.

Remove the option to "prompt for name on addition of new instance" as stated here: Transitioning from Darktable - Preferences

The option allowing you to immediately rename a new instance of a module has been removed, and users will always be prompted to rename a new instance right after adding it. This simply encourages sane workflows, because everybody adding unnamed instances has always hated themselves shortly after. Be nice to your future self, people (no, you will not remember).

Add Sigma 15mm f/2.8 EX DG Diagonal Fisheye

upload 765014

When building for the Universal Windows Platform (UWP), gai_strerror from ws2tcpip.h returns WCHAR * instead of char *; see https://learn.microsoft.com/en-us/windows/win32/api/ws2tcpip/nf-ws2tcpip-gai_strerrorw. Hence this requires a reinterpret_cast to the expected error() argument type.

PoC: https://github.com/xbmc/xbmc/pull/24109


Sounds like CI should be added.

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (4232888) 63.90% compared to head (4d33f88) 63.90%.

Files Patch % Lines
src/http.cpp 0.00% 1 Missing :warning:
@@ Coverage Diff @@
## main #2843 +/- ##
=======================================
 Coverage 63.90% 63.90% 
=======================================
 Files 103 103 
 Lines 22370 22370 
 Branches 10866 10866 
=======================================
 Hits 14295 14295 
 Misses 5855 5855 
 Partials 2220 2220 


I will supersede this with another PR; a reinterpret_cast may not be safe here (better to use the Windows API). Will also add CI for validation.

And simply disabling the web feature for UWP is not an option?

And simply disabling the web feature for UWP is not an option?

@kmilos as far as I can see, HTTP support is built regardless of any configuration option available right now (e.g. even if webready is disabled). Do you mean guarding the HttpIO inclusion depending on EXIV2_ENABLE_WEBREADY? That would work for me (as long as we can disable filesystem access too, since that also triggers other issues on UWP).

Ah, sorry, I assumed it was gated by EXIV2_ENABLE_WEBREADY already...

See https://www.itu.int/rec/T-REC-T.814/en

Unfortunately couldn't find public samples or an encoder to generate one easily.


Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison is base (4232888) 63.90% compared to head (72d948c) 63.89%.

Files Patch % Lines
src/jp2image.cpp 33.33% 1 Missing and 1 partial :warning:
src/jp2image_int.cpp 50.00% 0 Missing and 1 partial :warning:
@@ Coverage Diff @@
## main #2842 +/- ##
==========================================
- Coverage 63.90% 63.89% -0.01% 
==========================================
 Files 103 103 
 Lines 22370 22373 +3 
 Branches 10866 10869 +3 
==========================================
 Hits 14295 14295 
- Misses 5855 5856 +1 
- Partials 2220 2222 +2 


See https://www.itu.int/rec/T-REC-T.815/en


Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (4232888) 63.90% compared to head (6d80f8c) 63.90%.

Files Patch % Lines
src/bmffimage.cpp 50.00% 2 Missing :warning:
@@ Coverage Diff @@
## main #2841 +/- ##
==========================================
- Coverage 63.90% 63.90% -0.01% 
==========================================
 Files 103 103 
 Lines 22370 22374 +4 
 Branches 10866 10868 +2 
==========================================
+ Hits 14295 14297 +2 
- Misses 5855 5857 +2 
 Partials 2220 2220 


Hello,

Please find the fix for issue #165. This corrects the behaviour encountered by Windows users when they try to filter collections by date.

André

Kudos, SonarCloud Quality Gate passed!

0 Bugs · 0 Vulnerabilities · 0 Security Hotspots · 0 Code Smells

No coverage information · No duplication information

Thanks !

This is an automatic backport of pull request #2839 done by Mergify. Cherry-pick of 42328889b5bc350d7e3b052641e667b0bddbdaba has failed:

On branch mergify/bp/0.28.x/pr-2839
Your branch is up to date with 'origin/0.28.x'.

You are currently cherry-picking commit 42328889b.
 (fix conflicts and run "git cherry-pick --continue")
 (use "git cherry-pick --skip" to skip this patch)
 (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Unmerged paths:
 (use "git add ..." to mark resolution)

both modified: src/canonmn_int.cpp

no changes added to commit (use "git add" and/or "git commit -a")

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally




Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (72129e7) 63.99% compared to head (bdc2ae6) 63.99%. Report is 5 commits behind head on 0.28.x.

@@ Coverage Diff @@
## 0.28.x #2840 +/- ##
=======================================
 Coverage 63.99% 63.99% 
=======================================
 Files 103 103 
 Lines 22338 22338 
 Branches 10821 10821 
=======================================
 Hits 14296 14296 
 Misses 5818 5818 
 Partials 2224 2224 


Window manager : ensure the config window size is loaded before sanitization is called.


@Mergifyio backport 0.28.x

backport 0.28.x

✅ Backports have been created

Merge pull request #2838 from norbertwg/translate-Exif.NikonAf2.AFAreaMode-

Translate exif.nikon af2.af area mode - enhanced

Merge branch 'main' of https://github.com/norbertwg/exiv2

Addresses https://github.com/darktable-org/darktable/issues/15598

See https://global.canon/en/c-museum/product/dcc619.html for regional aliases.

Edit: went for the max crop for the CHDK mode, different/larger than the DNG.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (a626bcd) 58.81% compared to head (9f9d745) 58.81%. Report is 17 commits behind head on develop.

@@ Coverage Diff @@
## develop #565 +/- ##
========================================
 Coverage 58.81% 58.81% 
========================================
 Files 235 235 
 Lines 14019 14019 
 Branches 1934 1934 
========================================
 Hits 8245 8245 
 Misses 5656 5656 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.39% <ø> (ø)
integration 47.46% <ø> (ø)
linux 56.90% <ø> (ø)
macOS 18.81% <ø> (ø)
rpu_u 47.46% <ø> (ø)
unittests 17.52% <ø> (ø)
windows ∅ <ø> (∅)



@kmilos thank you!

Merge remote-tracking branch 'upstream/pr/564' into develop

  • upstream/pr/564:
      cpu-cache-line-size.cpp: RISC-V does support querying L1D cache line size
      cpu-cache-line-size.cpp: getauxval(AT_L1D_CACHEGEOMETRY)
      Rewrite cpu-cache-line-size.cpp to be more modular

Merge remote-tracking branch 'upstream/pr/559' into develop

  • upstream/pr/559: Prefer vendor crop for Panasonic DMC-LX7

cpu-cache-line-size.cpp: getauxval(AT_L1D_CACHEGEOMETRY)

While i thought this would work for RISCV64, it does not...

As @kmilos pointed out in #2835, there may be more than one way to translate, as documented in https://exiftool.org/TagNames/Nikon.html#AFInfo2. Based on that information, translation now also depends on the value of Exif.NikonAf2.ContrastDetectAF. The array of values and their translations is limited to those which I could verify with Nikon NX Studio. The exiftool table has additional values and I tried several of them, but Nikon NX Studio did not show the related texts, so I did not include them.


Thanks. You'll need to rebase as the previous version was already merged.

@kmilos I did a rebase.

I did a rebase.

See the merge conflict messages below?

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (e8326ba) 63.89% compared to head (a1d2139) 63.90%.

Files Patch % Lines
src/nikonmn_int.cpp 77.77% 0 Missing and 2 partials :warning:
@@ Coverage Diff @@
## main #2838 +/- ##
=======================================
 Coverage 63.89% 63.90% 
=======================================
 Files 103 103 
 Lines 22361 22370 +9 
 Branches 10861 10866 +5 
=======================================
+ Hits 14288 14295 +7 
 Misses 5855 5855 
- Partials 2218 2220 +2 


@kmilos the merge conflicts are gone now, but the Commits tab shows four commits, which surprises me a little bit (I am still learning Git). Checking the "Files changed" tab, I see the changes I intended to make on top of #2835, so I think it is fine now.

That's because you're merging the main branch into your PR branch instead of rebasing your PR branch on top of the main branch. Yeah, git takes some practice, you'll get there eventually 😉

Ok, do I still have a chance to fix it, or would it be better to create a new branch in my fork and make a new pull request?

Ok, do I still have a chance to fix it, or would it be better to create a new branch in my fork and make a new pull request?

No need, it'll work out.

Remove regression tests

  1. They are written in Ruby
  2. They use samples from https://rawsamples.ch/index.php/en/
  3. They seem to output an HTML website using a jQuery version having known security issues
  4. They are 8 years old
  5. They are documented nowhere

pthreads : prevent mutex locks from being destroyed while they are still held.
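POSIX leaves destroying a locked mutex undefined, which is the invariant this commit enforces. A minimal, hedged sketch of a defensive teardown (not Ansel's actual code; trylock is only a debugging probe and cannot rule out a race with a concurrent locker):

```c
#include <pthread.h>

/* Every lock must be released before pthread_mutex_destroy() is
 * called: destroying a locked mutex is undefined behaviour. */
static int safe_teardown(pthread_mutex_t *m)
{
  if(pthread_mutex_trylock(m) != 0)
    return -1;                       /* still held: do NOT destroy */
  pthread_mutex_unlock(m);           /* release the probe lock... */
  return pthread_mutex_destroy(m);   /* ...then destroy, now safe */
}
```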

Static analysis errors : compare float to float constant in graduatednd.c

Comparing a float with the double constant M_PI is never true.

Static analysis errors : remove unused variable in ashift_nmsimplex.c

k is set and incremented but never used (was a debug variable).

Merge pull request #563 from LebedevRI/ci

Attempt to fix XSD validation w/ libxml2 2.12.0

Static analysis errors : remove unused variable in tiling.c

k was set and incremented but never read.

Attempt to fix XSD validation w/ libxml2 2.12.0

Validation for the CFA2 variant of CFA encoding is rather partial anyway, but it isn't even legal to try to match the same element as different XSD types...

Static analysis errors : fix bad call in lib.c

position() takes no argument.

Static analysis errors : unused variable in develop.c

Rewrite a for loop with a while to avoid incrementing a dummy counter.

Deprecate the chart tool

It's old code with several bugs that isn't worth fixing since it relies on basecurve.

Validation for the CFA2 variant of CFA encoding is rather partial anyway, but it isn't even legal to try to match the same element as different XSD types...

Fixes #562.

colorchart.c: fix static analysis Clang 15 error

Remove unused variable


WXS schema failed to compile #562

Closed

aurelienpierre opened this issue Nov 18, 2023 · 2 comments

· Fixed by #563

aurelienpierre (Member) commented on Nov 18, 2023:

On Windows (GitHub image windows-latest) with:

  • GNU GCC 13.2.0
  • LibXml2 2.12.0
  • Pugixml 1.14
  • dev branch at 94a6783

The previous successful run was with LibXml2 2.11.5.


LebedevRI (Member) commented on Nov 18, 2023:

In the meantime, there's an escape hatch.

LebedevRI mentioned this issue in #563, "Attempt to fix XSD validation w/ libxml2 2.12.0" (merged), on Nov 18, 2023.

LebedevRI closed this as completed in #563 on Nov 18, 2023.

LebedevRI (Member) commented on Nov 18, 2023:

@aurelienpierre thank you. Fixed in , and propagated to (which is the recommended branch to use FYI).


The Sigma 24mm F1.4 DG HSM | A is not recognized by darktable (master, [eb2d8420]). The lens is just recognized as "126", i.e. its lens ID. Reading the metadata from the RAW with the latest version of exiftool (12.6.9.0) on Windows, it reports the following:

Lens Type: G Lens: 24mm f/1.4 Lens Data Version: 0204 Lens ID Number: 126 Lens F Stops: 7.00 Lens ID: Sigma 24mm F1.4 DG HSM | A Lens Spec: 24mm f/1.4 G

Seems as if it was added not too long ago, since an older version of exiftool (12.4.4.0) I tried was not able to detect it. It would be great if this lens could be added to the database. I provided correction data in #2126.

If darktable doesn't identify the lens, you might have a look at https://github.com/darktable-org/darktable/wiki/User's-portal#my-lens-is-wrongly-identified — you need to help exiv2 identify it correctly from the raw file's metadata.

the lens itself is in the slr-sigma.xml:

Sigma 24mm f/1.4 DG HSM | [A] Art 015
Sigma 24mm f/1.4 DG HSM | A

Thanks so much @MStraeten, I finally managed to let darktable automatically detect that lens 🦾

I created %USERPROFILE%\exiv2.ini with the following entry

[nikon] 126=Sigma 24mm F1.4 DG HSM | A

Apart from this, I also needed to add the correction values that I provided in #2126 to slr-sigma.xml, since the existing data there relates to a crop factor of 1, not 1.5, and thus yields different correction values.

@Macchiato17 Your profile in #2126 has been added to the db. A lens issue should be created in the exiv2 project. Providing a picture, some information and a link to #2126 would be necessary. Do you want to do that?

Thanks so much @tuxfanx for processing my request so quickly. I just raised an issue in the exiv2 project.

With this request I provide the calibration data (distortion, TCA) for the Sigma 24mm f/1.4 DG HSM Art lens. This lens already exists in lensfun, but only for full frame. This calibration data is intended for APS-C (shot on a Nikon F AF mount). Sigma 24mm DG HSM Art APS-C Nikon-F.zip

I will try to add vignetting correction to this data within the next few days. I will update this issue accordingly.

Could you provide a raw image?

I'd like to, but uploads here are unfortunately restricted to 25 MB; zipping the RAW didn't help much 🤪

Use the lensfun upload server. I will find the picture there.

OK, I just uploaded the package with a reference to this issue.

CI : remove Ubuntu 20.04

No GCC12 available on Ubuntu 20.04.

Header panel : force the menu bar to be visible all the time except in "no panels" (preview) mode

Users coming from dt are in the habit of keeping the header bar collapsed because there is nothing useful in there. That won't work with Ansel.

Image.c : reintroduce local copy fetching in dt_image_write_sidecar_file

Not having it seems to make XMP writing fail randomly when importing new pictures. That makes no sense considering the original file is still being fetched and should be present at import time. We are most likely looking at a deeper problem buried in the depths of the function tree.

Math: add fast integer power function

Some idiot (actually, always the same) thought it acceptable to compute powf(10.f, (int)digits) in GUI slider code. If you are going to use your computer as a toaster, at least mine Bitcoin.
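For reference, the usual integer replacement for powf(10.f, (int)digits) is exponentiation by squaring; a minimal sketch with a hypothetical helper name, not necessarily the function the commit added:

```cpp
#include <cstdint>

// Exponentiation by squaring: O(log exp) multiplies instead of a
// transcendental powf() call. Hypothetical helper for illustration.
std::uint64_t ipow(std::uint64_t base, unsigned exp) {
  std::uint64_t result = 1;
  while (exp > 0) {
    if (exp & 1) // odd exponent: fold one factor into the result
      result *= base;
    base *= base; // square the base, halve the exponent
    exp >>= 1;
  }
  return result;
}
```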

The XMP files are not being saved in the original picture file folder. It works on 43e4fcex, but not after.

Is this by design? If so, where are the XMP files being saved to?

System

  • OS: Windows 11
  • Memory: 64 GB
  • Graphics card: RTX 3060
  • Graphics driver:
  • OpenCL installed: Yes
  • OpenCL activated: Yes

I also noticed this, not sure whether this is a bug or by design too.

By the way, I think you mean commit 43e4fce; there should be no x in a hex hash.

Hmm I'm using f7669af, built myself under Ubuntu and .xmp files are being saved in the image folder.

To quote @aurelienpierre (matrix chat, 18. November 2023): "The XMP are saved along the images, nothing has changed here if they are not then we have a problem"

This has been fixed

Merge pull request #561 from LebedevRI/ci

Fix clang/i386 OBS builds

FileReader::readFile(): avoid performing tautological comparison

clang for i586 on suse warns:

[ 44s] /home/abuild/rpmbuild/BUILD/rawspeed-v3.5~git1432.65b347d/src/librawspeed/io/FileReader.cpp:71:16: error: result of comparison 'size_t' (aka 'unsigned int') > 4294967295 is always false [-Werror,-Wtautological-type-limit-compare]
[ 44s] 71 | if (fileSize > std::numeric_limits::max())
[ 44s] | ~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
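One common way to keep such a bounds check warning-free on 32-bit targets is to compile the comparison only when the source type can actually exceed the target type's range; a sketch with a hypothetical helper, not the actual fix applied here:

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>

// Only emit the range comparison when size_t can actually exceed
// Target's maximum, so 32-bit builds never see a tautological
// compare. Hypothetical helper; not the actual FileReader change.
template <typename Target>
bool fitsIn(std::size_t fileSize) {
  constexpr auto kSizeMax =
      static_cast<std::uintmax_t>(std::numeric_limits<std::size_t>::max());
  constexpr auto kTargetMax =
      static_cast<std::uintmax_t>(std::numeric_limits<Target>::max());
  if constexpr (kSizeMax > kTargetMax)
    return fileSize <= static_cast<std::size_t>(kTargetMax);
  else
    return true; // size_t cannot exceed Target: check is vacuous
}
```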

Codecov Report

Attention: 19 lines in your changes are missing coverage. Please review.

Comparison is base (65b347d) 58.81% compared to head (4264e0b) 58.81%.

Files Patch % Lines
src/librawspeed/decoders/ArwDecoder.cpp 47.22% 19 Missing :warning:
@@ Coverage Diff @@
## develop #561 +/- ##
========================================
 Coverage 58.81% 58.81% 
========================================
 Files 235 235 
 Lines 14014 14019 +5 
 Branches 1934 1934 
========================================
+ Hits 8242 8245 +3 
- Misses 5654 5656 +2 
 Partials 118 118 
Flag Coverage Δ
benchmarks 8.39% <0.00%> (-0.01%) :arrow_down:
integration 47.46% <51.28%> (+<0.01%) :arrow_up:
linux 56.90% <51.28%> (+<0.01%) :arrow_up:
macOS 18.81% <0.00%> (-0.01%) :arrow_down:
rpu_u 47.46% <51.28%> (+<0.01%) :arrow_up:
unittests 17.52% <0.00%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Hey,

I am trying to add exiv2 as a dependency for Kodi/XBMC as it seems the most complete GPL2 library for image metadata extraction (since it covers both EXIF and IPTC which we currently do with custom parsers - pretty much untouched since the original xbox days ~ 14 years ago). As you might easily guess, it's pretty much broken by now :)

Unfortunately, since the application is available on multiple platforms some of which are not supported by default by exiv2 (e.g. android, tvos, ios, xbox/uwp, etc) filesystem access is not something that can be taken for granted. The usage we plan to do of the library only requires that we are able to read the image (for tag extraction) from a memory buffer - hence we only really need the MemIO.

https://github.com/xbmc/xbmc/pull/24109/files#diff-40b93a8f1858865324006f152285b7b28de2a1b8e878cb9e04ac4f4f4d11dd0fR35-R36

PoC for inclusion is being done in https://github.com/xbmc/xbmc/pull/24109

This PR proposes an additional option EXIV2_ENABLE_FILESYSTEM_ACCESS to allow disabling FileIO and generic filesystem access. This is only valid if unit tests, the exiv2 app, fuzzers, etc. are not enabled, so I think it shouldn't affect the normal behaviour of the application. In case this is accepted, the CI should probably have a way to validate further builds.

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (9f90144) 63.88% compared to head (96f9e9a) 63.88%.

Files Patch % Lines
src/basicio.cpp 0.00% 6 Missing :warning:
@@ Coverage Diff @@
## main #2837 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22369 22369 
 Branches 10865 10865 
=======================================
 Hits 14291 14291 
 Misses 5856 5856 
 Partials 2222 2222 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Added a new commit to add a new job to validate a build with filesystem disabled. This likely won't run as part of this PR but you can check the result here: https://github.com/enen92/exiv2/actions/runs/6937027229/job/18870288972?pr=2

rebased after https://github.com/Exiv2/exiv2/pull/2844 got merged

Merge pull request #560 from LebedevRI/misc

Misc touchups

Codecov Report

Attention: 55 lines in your changes are missing coverage. Please review.

Comparison is base (7b2ba51) 58.75% compared to head (1e71113) 58.81%.

Files Patch % Lines
src/librawspeed/decoders/DngDecoder.cpp 34.78% 29 Missing and 1 partial :warning:
src/librawspeed/decoders/NefDecoder.cpp 0.00% 13 Missing :warning:
src/librawspeed/decompressors/VC5Decompressor.cpp 22.22% 7 Missing :warning:
src/utilities/rsbench/main.cpp 0.00% 2 Missing :warning:
src/librawspeed/common/DngOpcodes.cpp 0.00% 1 Missing :warning:
src/librawspeed/common/RawImageDataFloat.cpp 0.00% 1 Missing :warning:
...awspeed/decompressors/UncompressedDecompressor.cpp 93.33% 1 Missing :warning:
@@ Coverage Diff @@
## develop #560 +/- ##
===========================================
+ Coverage 58.75% 58.81% +0.06% 
===========================================
 Files 235 235 
 Lines 14017 14014 -3 
 Branches 1935 1934 -1 
===========================================
+ Hits 8235 8242 +7 
+ Misses 5665 5654 -11 
- Partials 117 118 +1 
Flag Coverage Δ
benchmarks 8.40% <15.00%> (-0.01%) :arrow_down:
integration 47.46% <34.14%> (+0.05%) :arrow_up:
linux 56.90% <48.80%> (+0.05%) :arrow_up:
macOS 18.82% <18.68%> (+0.01%) :arrow_up:
rpu_u 47.46% <34.14%> (+0.05%) :arrow_up:
unittests 17.53% <4.00%> (+0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Clang: drop hopefully-obsolete thread-safety-analysis false-positive workaround

Describe the bug

exiv2 -m file fails when value is filled with whitespaces.

To Reproduce

Create commands file metadata.txt with following content:

set Exif.Photo.Flash Short 0
set Exif.Photo.FocalLength Rational 350/10
set Exif.Photo.FocalLengthIn35mmFilm Short 35
set Exif.Photo.LensSpecification Rational 350/10 350/10 140/100 140/100
set Exif.Photo.LensMake Ascii 
set Exif.Photo.LensModel Ascii 35mm f/1.4E

(note the whitespaces in Exif.Photo.LensMake line)

Try to apply this metadata to an image:

exiv2 -m metadata.txt image.jpg

Error:

metadata.txt, line 5: Invalid command line
exiv2: Error parsing -m option arguments
Usage: exiv2 [ option [ arg ] ]+ [ action ] file ...

Expected behavior

Specified value is applied as is, or an empty string is applied.

Desktop (please complete the following information):

  • OS and version: macOS 14.1.1 / arm64
  • Exiv2 version and source: exiv2 0.28.1 via Homebrew
  • Compiler and version: N/A
  • Compilation mode and/or compiler flags: N/A

Additional context

Can also be reproduced with exiv2 -M "set Exif.Photo.LensMake Ascii " image.jpg (note the space before closing ", exiv2 -M "set Exif.Photo.LensMake Ascii" image.jpg or exiv2 -M "set Exif.Photo.LensMake Ascii ''" image.jpg work fine).

Looks like a regression; I believe the previous version was able to process the same command, or maybe the previous version trimmed -PVk -K output. I have a script I use to selectively copy Exif metadata, and noticed that it started to fail with some files I was able to process before.

In case this may be useful to someone having the same issue: as a workaround, I am piping exiv2 -PVk output through sed to trim trailing whitespace, so the result can be processed by exiv2 -m-:

exiv2 -PVk \
 -K Exif.Image.DateTime \
 -K Exif.Image.Make \
 -K Exif.Image.Model \
 -K Exif.Photo.LensMake \
 -K Exif.Photo.LensModel \
 source.jpg \
| sed -e 's/[[:space:]]*$//' \
| exiv2 -m- target.jpg

I was about to state that it may not be a bug, but rather a feature. But I found this in the manual:

In the file, any blank lines or additional white space is ignored and any lines beginning with a # are comments.

Also, from the same manual:

value The remaining text on the line is the value, and can optionally be enclosed in quotes (see Quotations with 'modify' commands). For Ascii, XmpAlt, XmpBag, XmpSeq and XmpText, the value is optional which is equivalent to an empty value ("")

So it seems like the expected behavior is not to differentiate between the following examples you've provided:

exiv2 -M "set Exif.Photo.LensMake Ascii " image.jpg

exiv2 -M "set Exif.Photo.LensMake Ascii" image.jpg

Relevant (I guess) parts of the source: 1. https://github.com/Exiv2/exiv2/blob/main/app/exiv2.cpp#L1234-L1257 (reading the lines) 2. https://github.com/Exiv2/exiv2/blob/main/app/exiv2.cpp#L1302-L1426 (parsing the line) 3. https://github.com/Exiv2/exiv2/blob/main/app/exiv2.cpp#L1363-L1405 (parsing the value, I guess the culprit is here)

Thanks for confirming my assumptions.

Whatever the parsing rules are, I think it is a fair expectation that exiv2 -m should be able to consume what exiv2 -Pvk produces.

NefDecoder::gammaCurve(): drop support for unused modes (i.e. other than 1)

Mode=0 seems to be passthrough in dcraw, while mode=2 seems to be for some gamma post-processing when writing some special ppm tiffs.

Neither of the cases happens for us, and if it does, it can be resurrected.

OlympusDecompressor: simplify difference of signedness check

https://godbolt.org/z/7Yd34nhd3 https://alive2.llvm.org/ce/z/H9a-52
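For illustration, a classic branch-free way to test whether two signed integers differ in sign is to XOR them and look at the sign bit; shown as a sketch, not necessarily the exact transform from the commit (see the godbolt/alive2 links above):

```cpp
// XOR of two signed ints has its sign bit set exactly when the
// operands' sign bits differ; zero counts as non-negative here.
// Hypothetical helper for illustration only.
inline bool differSign(int a, int b) { return (a ^ b) < 0; }
```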

Use if constexpr where appropriate

Mass string-replace + https://godbolt.org/z/hzPjWhvo6 + manual fixes
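For illustration, the shape of such a mechanical replacement: a runtime if over std::is_same becomes if constexpr, so the untaken branch is discarded at compile time rather than merely optimized away. Hypothetical example, not code from the PR:

```cpp
#include <cstdint>
#include <type_traits>

// With `if constexpr`, only the selected branch is instantiated;
// the others are discarded at compile time.
template <typename T>
int bitsPerSample() {
  if constexpr (std::is_same_v<T, std::uint8_t>)
    return 8;
  else if constexpr (std::is_same_v<T, std::uint16_t>)
    return 16;
  else
    return 8 * sizeof(T); // fallback for wider sample types
}
```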

Resolves https://github.com/darktable-org/darktable/issues/14082

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (7b2ba51) 58.75% compared to head (abe414e) 58.75%. Report is 21 commits behind head on develop.

@@ Coverage Diff @@
## develop #559 +/- ##
========================================
 Coverage 58.75% 58.75% 
========================================
 Files 235 235 
 Lines 14017 14017 
 Branches 1935 1935 
========================================
 Hits 8235 8235 
 Misses 5665 5665 
 Partials 117 117 
Flag Coverage Δ
benchmarks 8.40% <ø> (ø)
integration 47.40% <ø> (ø)
linux 56.85% <ø> (ø)
macOS 18.80% <ø> (ø)
rpu_u 47.40% <ø> (ø)
unittests 17.51% <ø> (ø)
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@kmilos @kofa73 thank you!

Texts are taken from Nikon NX Studio. NX Studio shows slightly different texts for Exif.NikonAf.AFAreaMode, which is already translated in exiv2, e.g. "Single-point AF" instead of "Single Area (wide) (4)". This could be changed, but as the texts are not misleading, I prefer not to, so that users are not irritated by a change.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (e9ba894) 63.89% compared to head (f2a28af) 63.89%. Report is 4 commits behind head on main.

@@ Coverage Diff @@
## main #2835 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22361 22361 
 Branches 10861 10861 
=======================================
 Hits 14288 14288 
 Misses 5855 5855 
 Partials 2218 2218 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Note that there might be more than one way to translate these: https://exiftool.org/TagNames/Nikon.html#AFInfo2

@kmilos thanks for your hint, the knowledge visible in https://exiftool.org/TagNames is impressive. Will consider this and prepare a pull request to incorporate this.

Merge pull request #558 from LebedevRI/fp-casts

Clang: enable -Wconversion

Codecov Report

Attention: 89 lines in your changes are missing coverage. Please review.

Comparison is base (818e55c) 58.80% compared to head (cd66bbf) 58.75%.

Files Patch % Lines
src/librawspeed/decoders/NefDecoder.cpp 0.00% 46 Missing :warning:
src/librawspeed/common/DngOpcodes.cpp 0.00% 12 Missing :warning:
src/librawspeed/decoders/DngDecoder.cpp 38.46% 8 Missing :warning:
src/utilities/identify/rawspeed-identify.cpp 0.00% 8 Missing :warning:
src/librawspeed/common/RawImageDataFloat.cpp 0.00% 4 Missing :warning:
src/librawspeed/common/RawImageDataU16.cpp 0.00% 3 Missing :warning:
src/librawspeed/decoders/MosDecoder.cpp 0.00% 3 Missing :warning:
src/librawspeed/decoders/CrwDecoder.cpp 66.66% 2 Missing :warning:
src/librawspeed/common/Spline.h 0.00% 1 Missing :warning:
src/librawspeed/decoders/ThreefrDecoder.cpp 50.00% 1 Missing :warning:
... and 1 more
@@ Coverage Diff @@
## develop #558 +/- ##
===========================================
- Coverage 58.80% 58.75% -0.06% 
===========================================
 Files 235 235 
 Lines 13989 14017 +28 
 Branches 1936 1935 -1 
===========================================
+ Hits 8226 8235 +9 
- Misses 5646 5665 +19 
 Partials 117 117 
Flag Coverage Δ
benchmarks 8.40% <2.54%> (-0.01%) :arrow_down:
integration 47.40% <25.92%> (-0.03%) :arrow_down:
linux 56.85% <26.36%> (-0.04%) :arrow_down:
macOS 18.80% <2.56%> (-0.03%) :arrow_down:
rpu_u 47.40% <25.92%> (-0.03%) :arrow_down:
unittests 17.51% <0.00%> (-0.04%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Clang: finally enable -Wdouble-promotion

NefDecoder::DecodeNikonSNef() is weird; almost certainly the computations should be done in float there.

Merge pull request #557 from LebedevRI/truncation

Clang: finally enable -Wshorten-64-to-32

Codecov Report

Attention: 96 lines in your changes are missing coverage. Please review.

Comparison is base (3370338) 58.85% compared to head (baab5ca) 58.80%.

Files Patch % Lines
src/librawspeed/common/DngOpcodes.cpp 0.00% 4 Missing :warning:
fuzz/librawspeed/decompressors/Cr2Decompressor.cpp 0.00% 3 Missing :warning:
...zz/librawspeed/decompressors/LJpegDecompressor.cpp 0.00% 3 Missing :warning:
src/librawspeed/decoders/OrfDecoder.cpp 0.00% 3 Missing :warning:
src/librawspeed/decoders/RawDecoder.cpp 0.00% 3 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Common.h 0.00% 2 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Dual.cpp 0.00% 2 Missing :warning:
fuzz/librawspeed/codes/PrefixCodeDecoder/Solo.cpp 0.00% 2 Missing :warning:
fuzz/librawspeed/common/DngOpcodes.cpp 0.00% 2 Missing :warning:
fuzz/librawspeed/decoders/TiffDecoders/main.cpp 0.00% 2 Missing :warning:
... and 38 more
@@ Coverage Diff @@
## develop #557 +/- ##
===========================================
- Coverage 58.85% 58.80% -0.06% 
===========================================
 Files 235 235 
 Lines 13898 13989 +91 
 Branches 1936 1936 
===========================================
+ Hits 8180 8226 +46 
- Misses 5601 5646 +45 
 Partials 117 117 
Flag Coverage Δ
benchmarks 8.41% <23.63%> (+0.09%) :arrow_up:
integration 47.43% <28.48%> (-0.04%) :arrow_down:
linux 56.89% <65.82%> (+0.07%) :arrow_up:
macOS 18.83% <36.44%> (+0.15%) :arrow_up:
rpu_u 47.43% <28.48%> (-0.04%) :arrow_down:
unittests 17.54% <17.27%> (+<0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

This is an automatic backport of pull request #2833 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (72129e7) 63.99% compared to head (fdf49b5) 63.99%. Report is 4 commits behind head on 0.28.x.

@@ Coverage Diff @@
## 0.28.x #2834 +/- ##
=======================================
 Coverage 63.99% 63.99% 
=======================================
 Files 103 103 
 Lines 22338 22338 
 Branches 10821 10821 
=======================================
 Hits 14296 14296 
 Misses 5818 5818 
 Partials 2224 2224 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

github CI: new vmactions needs ubuntu

Signed-off-by: Rosen Penev

Bump vmactions/freebsd-vm from 0 to 1

Bumps vmactions/freebsd-vm from 0 to 1. - Release notes - Commits


updated-dependencies: - dependency-name: vmactions/freebsd-vm dependency-type: direct:production update-type: version-update:semver-major ...

Signed-off-by: dependabot[bot]

Merge remote-tracking branch 'upstream/pr/556' into develop

  • upstream/pr/556: Clang: finally enable -Wimplicit-int-conversion, and launder via implicit_cast<>() Endianness: don't even try to byteswap a byte

Codecov Report

Attention: 32 lines in your changes are missing coverage. Please review.

Comparison is base (f4d0d5d) 58.86% compared to head (c12e809) 58.85%.

Files Patch % Lines
src/librawspeed/decoders/IiqDecoder.cpp 14.28% 6 Missing :warning:
src/librawspeed/codes/PrefixCodeLUTDecoder.h 37.50% 5 Missing :warning:
...awspeed/decompressors/UncompressedDecompressor.cpp 20.00% 4 Missing :warning:
src/librawspeed/decompressors/Cr2LJpegDecoder.cpp 0.00% 3 Missing :warning:
src/librawspeed/adt/Casts.h 50.00% 2 Missing :warning:
src/librawspeed/codes/BinaryPrefixTree.h 0.00% 2 Missing :warning:
src/librawspeed/decoders/NefDecoder.cpp 0.00% 2 Missing :warning:
src/librawspeed/decoders/OrfDecoder.cpp 0.00% 2 Missing :warning:
src/librawspeed/decoders/Cr2Decoder.cpp 0.00% 1 Missing :warning:
src/librawspeed/decompressors/CrwDecompressor.cpp 80.00% 1 Missing :warning:
... and 4 more
@@ Coverage Diff @@
## develop #556 +/- ##
===========================================
- Coverage 58.86% 58.85% -0.02% 
===========================================
 Files 234 235 +1 
 Lines 13875 13898 +23 
 Branches 1935 1936 +1 
===========================================
+ Hits 8168 8180 +12 
- Misses 5590 5601 +11 
 Partials 117 117 
Flag Coverage Δ
benchmarks 8.31% <4.61%> (+<0.01%) :arrow_up:
integration 47.46% <74.59%> (+0.02%) :arrow_up:
linux 56.81% <76.80%> (+<0.01%) :arrow_up:
macOS 18.68% <4.80%> (-0.04%) :arrow_down:
rpu_u 47.46% <74.59%> (+0.02%) :arrow_up:
unittests 17.53% <8.46%> (-0.02%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Endianness: don't even try to byteswap a byte

Yay for C implicit promotion.
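The underlying idea can be sketched as follows: make the swap a no-op for single-byte types, so the value never has to round-trip through C's implicit promotion to int. Hypothetical helper, not rawspeed's actual Endianness API:

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>

// Reverse the bytes of an integral value; for 1-byte types there is
// nothing to swap, so return the value untouched instead of letting
// implicit promotion to int kick in. Sketch only.
template <typename T>
T byteSwap(T v) {
  static_assert(std::is_integral_v<T>);
  if constexpr (sizeof(T) == 1) {
    return v; // a byte has no byte order
  } else {
    T out{};
    auto* src = reinterpret_cast<const unsigned char*>(&v);
    auto* dst = reinterpret_cast<unsigned char*>(&out);
    for (std::size_t i = 0; i < sizeof(T); ++i)
      dst[i] = src[sizeof(T) - 1 - i];
    return out;
  }
}
```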

Bump actions/github-script from 6 to 7

Bumps actions/github-script from 6 to 7. - Release notes - Commits


updated-dependencies: - dependency-name: actions/github-script dependency-type: direct:production update-type: version-update:semver-major ...

Signed-off-by: dependabot[bot]

Found in recent Nikon Z f raw.pixls.us sample. @boardhead

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (e9ba894) 63.89% compared to head (8646eb1) 63.89%.

@@ Coverage Diff @@
## main #2833 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22361 22361 
 Branches 10861 10861 
=======================================
 Hits 14288 14288 
 Misses 5855 5855 
 Partials 2218 2218 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

@Mergifyio backport 0.28.x

backport 0.28.x

✅ Backports have been created

Commit a8c3455e5cd7ee65acc5f398581e1386f7df5108 and commit eb05551ed2d21079299f2f4da2f463df6857b884 changed the target of the exiv2 library (exiv2lib), exporting it in the Exiv2 namespace, so making it usable as Exiv2::exiv2lib instead. An ALIAS to exiv2lib was added, however cmake does not install or export ALIAS targets [1].

Hence, restore compatibility with the existing cmake users of exiv2: manually create an ALIAS target in the cmake config files after all the targets are loaded and checked.

[1] https://cmake.org/cmake/help/latest/command/add_library.html

Tested with digikam and photoqt, which refer to exiv2lib in their cmake build systems.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (f551534) 63.88% compared to head (1d01500) 63.88%.

@@ Coverage Diff @@
## main #2832 +/- ##
=======================================
 Coverage 63.88% 63.88% 
=======================================
 Files 103 103 
 Lines 22381 22381 
 Branches 10873 10873 
=======================================
 Hits 14297 14297 
 Misses 5862 5862 
 Partials 2222 2222 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

ping @Ryanf55

ping @Ryanf55

This is an issue if we lose the original target name; those changes were not intended to be breaking. But, I'm not sure this is the right fix. I can try an alternative fix. The targets are supposed to be generated automatically by the config script

Once we get the fix in, it would be prudent to add a test in CI to verify that the targets users expect exist.

Would it be possible to backport this to the 0.28.x branch, please?

@mergifyio backport 0.28.x

backport 0.28.x

✅ Backports have been created

Codecov Report

Attention: 5 lines in your changes are missing coverage. Please review.

Comparison is base (91e32b5) 58.87% compared to head (031acbb) 58.86%.

Files Patch % Lines
src/librawspeed/common/DngOpcodes.cpp 0.00% 4 Missing :warning:
...rc/librawspeed/decompressors/Cr2DecompressorImpl.h 83.33% 1 Missing :warning:
@@ Coverage Diff @@
## develop #555 +/- ##
===========================================
- Coverage 58.87% 58.86% -0.01% 
===========================================
 Files 234 234 
 Lines 13874 13875 +1 
 Branches 1935 1935 
===========================================
 Hits 8168 8168 
- Misses 5589 5590 +1 
 Partials 117 117 
Flag Coverage Δ
benchmarks 8.31% <17.64%> (+0.02%) :arrow_up:
integration 47.43% <80.00%> (-0.01%) :arrow_down:
linux 56.80% <75.00%> (-0.01%) :arrow_down:
macOS 18.71% <13.33%> (+0.02%) :arrow_up:
rpu_u 47.43% <80.00%> (-0.01%) :arrow_down:
unittests 17.55% <17.64%> (-0.01%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Merge pull request #554 from LebedevRI/misc

Some more misc C++20 cleanups

Codecov Report

Attention: 37 lines in your changes are missing coverage. Please review.

Comparison is base (cd0e0c4) 58.86% compared to head (62c3af9) 58.86%.

Files Patch % Lines
src/librawspeed/decoders/IiqDecoder.cpp 16.66% 5 Missing :warning:
src/librawspeed/tiff/TiffEntry.cpp 64.28% 5 Missing :warning:
src/librawspeed/decoders/DngDecoder.cpp 42.85% 4 Missing :warning:
src/librawspeed/decoders/NakedDecoder.cpp 0.00% 4 Missing :warning:
src/librawspeed/tiff/CiffEntry.cpp 33.33% 4 Missing :warning:
src/librawspeed/decoders/Rw2Decoder.cpp 25.00% 3 Missing :warning:
src/librawspeed/metadata/Camera.cpp 82.35% 3 Missing :warning:
src/librawspeed/common/RawImage.cpp 33.33% 2 Missing :warning:
src/librawspeed/decoders/RawDecoder.cpp 33.33% 2 Missing :warning:
src/librawspeed/decompressors/VC5Decompressor.cpp 90.00% 2 Missing :warning:
... and 3 more
@@ Coverage Diff @@
## develop #554 +/- ##
========================================
 Coverage 58.86% 58.86% 
========================================
 Files 234 234 
 Lines 13880 13871 -9 
 Branches 1935 1935 
========================================
- Hits 8170 8165 -5 
+ Misses 5593 5589 -4 
 Partials 117 117 
Flag Coverage Δ
benchmarks 8.26% <14.50%> (-0.02%) :arrow_down:
integration 47.44% <66.40%> (+<0.01%) :arrow_up:
linux 56.80% <70.99%> (-0.01%) :arrow_down:
macOS 18.69% <27.11%> (+<0.01%) :arrow_up:
rpu_u 47.44% <66.40%> (+<0.01%) :arrow_up:
unittests 17.53% <25.95%> (-0.02%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

Describe the bug

The use of std::regex in src/datasets.cpp in IptcKey::decomposeKey causes severe slowdowns (minutes) and the use of huge amounts of memory (gigabytes). This is called in GIMP's dev version when built on Windows under the UCRT64 profile of MSYS2 through the gexiv2 library (0.14.2).

To Reproduce

  1. On Windows, under MSYS2 UCRT64, install package mingw-w64-ucrt-x86_64-gimp3 (dev version 2.99.16) or self build current master.
  2. Start gimp-2.99 and open an image that has IPTC metadata, for testing I used e.g. https://github.com/psd-tools/psd-tools/blob/main/tests/psd_files/16bit5x5.psd.
  3. Observe the very long waiting time; with e.g. Process Explorer you can see huge amounts of memory being used.

Expected behavior

Expected is quick loading and a reasonable amount of memory used.

Desktop (please complete the following information):

  • OS and version: Windows 10 Home, 64-bit
  • Exiv2 version and source: mingw-w64-ucrt-x86_64-exiv2 0.28.1-1 from the MSYS2 repository
  • Compiler and version: whatever is used by default on up-to-date MSYS2 UCRT64 repository
  • Compilation mode and/or compiler flags: Release

Additional context

This issue is discussed in this GIMP MR to evaluate moving from MINGW64 to UCRT64 on Windows.

As I describe there, I added print statements to verify where the slowdown occurred, which is on the following line in decomposeKey:

static const std::regex re(R"((\w+)(\.\w+){2})");

A reply in that issue suggests that std::regex on Windows is indeed buggy and is better avoided, with references to gcc and Inkscape issues.
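For illustration, the particular pattern R"((\w+)(\.\w+){2})" is simple enough to validate by hand without std::regex; a sketch with a hypothetical helper (not exiv2's actual API), assuming \w means [A-Za-z0-9_]:

```cpp
#include <cstddef>
#include <string_view>

// Matches "Family.Group.Tag": exactly three non-empty word-character
// runs separated by dots. Hypothetical replacement sketch only.
static bool isWordChar(char c) {
  return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
         (c >= '0' && c <= '9') || c == '_';
}

bool looksLikeKey(std::string_view key) {
  int parts = 0;
  std::size_t i = 0;
  while (parts < 3) {
    std::size_t start = i;
    while (i < key.size() && isWordChar(key[i]))
      ++i;
    if (i == start)
      return false; // each part needs at least one word character
    ++parts;
    if (parts < 3) {
      if (i >= key.size() || key[i] != '.')
        return false; // parts must be dot-separated
      ++i;
    }
  }
  return i == key.size(); // no trailing characters allowed
}
```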

Hmm. Does seem like a bigger issue at hand.

I wonder if we could try to reduce the grammar in use a bit (does look like a sane thing to try), due to the following:

"By default, if no grammar is specified, ECMAScript is assumed. Only one grammar may be specified." [1]

I wonder if that could/would/should change anything.

[1] https://learn.microsoft.com/en-us/cpp/standard-library/regular-expressions-cpp?view=msvc-170

Also, yet, another interesting thread on GCC side of things: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98723

We may fix the issue in GCC/libstdc++ then. Taking in account the return value / errno in calls to strxfrm() should be enough.

and now it gets stuck on a different image with IPTC DateCreated/TimeCreated tags; looks like they also use regex.

Should this be reopened, or is a new issue better, or wait for a fix in libstdc++?

new issue probably. would be great to find which regex causes problems.

We don't want the dimensions of the embedded preview for RAF raw files, but sensor (or cropped) dimensions like it is the case for other raw formats, e.g. Panasonic RW2.

Edit: also leave room and hooks for decoding the proprietary RAF metadata structure for older cameras not containing the FujifilmIFD.

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (38adc8e) 63.89% compared to head (6a9a50a) 63.88%.

:exclamation: Current head 6a9a50a differs from pull request most recent head 34f6b84. Consider uploading reports for the commit 34f6b84 to get more accurate results

Files Patch % Lines
src/rafimage.cpp 0.00% 6 Missing :warning:
@@ Coverage Diff @@
## main #2830 +/- ##
==========================================
- Coverage 63.89% 63.88% -0.01% 
==========================================
 Files 103 103 
 Lines 22377 22365 -12 
 Branches 10871 10863 -8 
==========================================
- Hits 14297 14288 -9 
- Misses 5858 5859 +1 
+ Partials 2222 2218 -4 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

rebase I think.

@Mergifyio backport 0.28.x

backport 0.28.x

✅ Backports have been created

Bumps vmactions/freebsd-vm from 0 to 1.

Sourced from vmactions/freebsd-vm's releases .

Use libvirt

Switch to use libvirt on Ubuntu.

please change the runs-on from macOS to Ubuntu. And use this new version v1

This is a totally rewritten version, based on Qemu and Libvirt. It's more stable and faster.

Support 13.2

fix for pytesseract

Improve rsync

Add reconnect for sshfs

Show files tree for debug

Fix for 13.0 bugs

v0.2.5

Minor, just using vbox v0.0.1

Fix sshfs to work in MacOS 12

Fix ntpd random issue

Update Major release version

Please use the major release version v0 instead.

Just polish the output

Minor, Just polish the output

Speed up the booting time

Just Speed up booting time

Fix bug for execSSH

fix bug vmactions/freebsd-vm#56

Support "release" config to select the OS version

Support 12.3, 13.0 and 13.1

Upgrade to FreeBSD 13.1

support copyback option

fix bugs

... (truncated)

d44bf83 Generated from https://github.com/vmactions/base-vm

3de75cc Generated from https://github.com/vmactions/base-vm

8d531d4 Generated from https://github.com/vmactions/base-vm

0a0faaa Generated from https://github.com/vmactions/base-vm

c6864ce Generated from https://github.com/vmactions/base-vm

0f5a85a fix dir

c700764 Generated from https://github.com/vmactions/base-vm

d2746be Generated from https://github.com/vmactions/base-vm

2eb64c6 Generated from https://github.com/vmactions/base-vm

ecc64e6 Generated from https://github.com/vmactions/base-vm

Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


You can trigger Dependabot actions by commenting on this PR: - @dependabot rebase will rebase this PR - @dependabot recreate will recreate this PR, overwriting any edits that have been made to it - @dependabot merge will merge this PR after your CI passes on it - @dependabot squash and merge will squash and merge this PR after your CI passes on it - @dependabot cancel merge will cancel a previously requested merge and block automerging - @dependabot reopen will reopen this PR if it is closed - @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - @dependabot show ignore conditions will show all of the ignore conditions of the specified dependency - @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (e9ba894) 63.89% compared to head (c507f79) 63.89%. Report is 1 commits behind head on main.

@@ Coverage Diff @@
## main #2829 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22361 22361 
 Branches 10861 10861 
=======================================
 Hits 14288 14288 
 Misses 5855 5855 
 Partials 2218 2218 

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

huh. vmactions used to require macOS. Interesting...

Bumps actions/github-script from 6 to 7.

Sourced from actions/github-script's releases.

v7.0.0

What's Changed

Add base-url option by @​robandpdx in actions/github-script#429

Expose async-function argument type by @​viktorlott in actions/github-script#402 , see for details https://github.com/actions/github-script#use-scripts-with-jsdoc-support

Update dependencies and use Node 20 by @​joshmgross in actions/github-script#425

New Contributors

@​navarroaxel made their first contribution in actions/github-script#285

@​robandpdx made their first contribution in actions/github-script#429

@​viktorlott made their first contribution in actions/github-script#402

Full Changelog : https://github.com/actions/github-script/compare/v6.4.1...v7.0.0

v6.4.1

What's Changed

Add @​octokit/plugin-request-log , to produce debug output for requests by @​mjpieters in actions/github-script#358

fix input handling by @​mjpieters in actions/github-script#357

Remove unused dependencies by @​mjpieters in actions/github-script#356

Default debug to current runner debug state by @​mjpieters in actions/github-script#363

New Contributors

@​mjpieters made their first contribution in actions/github-script#358

Full Changelog : https://github.com/actions/github-script/compare/v6.4.0...v6.4.1

v6.4.0

What's Changed

Bump json5 from 2.1.3 to 2.2.3 by @​dependabot in actions/github-script#319

Bump minimatch from 3.0.4 to 3.1.2 by @​dependabot in actions/github-script#320

Add node-fetch by @​danmichaelo in actions/github-script#321

New Contributors

@​jongwooo made their first contribution in actions/github-script#313

@​austinvazquez made their first contribution in actions/github-script#306

@​danmichaelo made their first contribution in actions/github-script#321

Full Changelog : https://github.com/actions/github-script/compare/v6.3.3...v6.4.0

v6.3.3

What's Changed

Update @actions/glob to 0.3.0 by @​nineinchnick in actions/github-script#279

New Contributors

@​nineinchnick made their first contribution in actions/github-script#279

Full Changelog : https://github.com/actions/github-script/compare/v6.3.2...v6.3.3

v6.3.2

What's Changed

Update @​actions/core to 1.10.0 by @​rentziass in actions/github-script#295

... (truncated)

e69ef54 Merge pull request #425 from actions/joshmgross/node-20

ee0914b Update licenses

d6fc56f Use @types/node for Node 20

384d6cf Fix quotations in tests

8472492 Only validate GraphQL previews

84903f5 Remove node-fetch from type

5349cf9 Merge branch 'main' into joshmgross/node-20

ecae9eb Merge pull request #402 from typed-actions/export-types

044ebbb Merge branch 'main' into export-types

6b5d3ea Merge pull request #429 from robandpdx/add-base-url-option

Additional commits viewable in compare view



Codecov Report

Merging #2828 (a5b1f95) into main (e9ba894) will not change coverage. The diff coverage is n/a.

@@ Coverage Diff @@
## main #2828 +/- ##
=======================================
 Coverage 63.89% 63.89% 
=======================================
 Files 103 103 
 Lines 22361 22361 
 Branches 10861 10861 
=======================================
 Hits 14288 14288 
 Misses 5855 5855 
 Partials 2218 2218 

Apply suggestions from code review

Co-authored-by: Kevin Backhouse

Use appropriate print format specifiers and constants

Merge pull request #553 from jnerin/patch-1

Update cameras.xml with values for Canon EOS 77D

Fixes: https://github.com/darktable-org/darktable/issues/15636

Took pictures at all ISOs (100, 125, 160, 200, 250, 320, 400, 500, 640, 800, 1000, 1250, 1600, 2000, 2500, 3200, 4000, 5000, 6400, 8000, 10000, 12800, 16000, 20000, 25600, 51200), converted them to DNG and processed them with dngmeta.rb to obtain updated values.

As mentioned in https://github.com/darktable-org/darktable/issues/15636#issuecomment-1807193990, we really should read these from EXIF.

@jnerin thank you!

Update cameras.xml with values for Canon EOS 77D

Fixes: https://github.com/darktable-org/darktable/issues/15636

Took pictures at all ISOs (100, 125, 160, 200, 250, 320, 400, 500, 640, 800, 1000, 1250, 1600, 2000, 2500, 3200, 4000, 5000, 6400, 8000, 10000, 12800, 16000, 20000, 25600, 51200), converted them to DNG and processed them with dngmeta.rb to obtain updated values.

This is an automatic backport of pull request #2826 done by Mergify.


More conditions and actions can be found in the documentation.

You can also trigger Mergify actions by commenting on this pull request:

  • @Mergifyio refresh will re-evaluate the rules
  • @Mergifyio rebase will rebase this PR on its base branch
  • @Mergifyio update will merge the base branch into this PR
  • @Mergifyio backport will backport this PR on `` branch

Additionally, on Mergify dashboard you can:

  • look at your merge queues
  • generate the Mergify configuration with the config editor.

Finally, you can contact us on https://mergify.com


Codecov Report

Merging #2827 (b6e1674) into 0.28.x (72129e7) will not change coverage. Report is 3 commits behind head on 0.28.x. The diff coverage is n/a.

@@ Coverage Diff @@
## 0.28.x #2827 +/- ##
=======================================
 Coverage 63.99% 63.99% 
=======================================
 Files 103 103 
 Lines 22338 22338 
 Branches 10821 10821 
=======================================
 Hits 14296 14296 
 Misses 5818 5818 
 Partials 2224 2224 
Files Coverage Δ
src/sonymn_int.cpp 79.90% <ø> (ø)

tests: fix parsing w/ path containing "-pa" or "-pS"

The OutputTagExtract test case for tiff_test runs exiv2 twice, with "-pa" and "-pS", and parses their outputs. To decide which parser to use, it checks for "-pa" and "-pS" anywhere in the command string being run. Since that string contains the full path to the test data, which is a subdirectory of the sources, the wrong parser is chosen whenever the source path itself contains "-pa" or "-pS" (e.g. "/build/some-path/exiv2/...").

Cheap fix: since "-pa"/"-pS" appear as options in the command string, match them with surrounding spaces.
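
The failure mode and the fix can be sketched in a few lines of Python (the helper names are illustrative, not the actual tiff_test code):

```python
# Sketch of the bug: choosing a parser by bare substring search picks the
# wrong one when the *path* in the command also contains "-pa".
def pick_parser_buggy(cmd: str) -> str:
    # "-pa" also matches inside "/build/some-path/exiv2/..."
    return "tag-parser" if "-pa" in cmd else "structure-parser"

# Cheap fix: options are space-delimited tokens, so match them with
# surrounding spaces (padding the string handles tokens at either end).
def pick_parser_fixed(cmd: str) -> str:
    return "tag-parser" if " -pa " in f" {cmd} " else "structure-parser"

cmd = "exiv2 -pS /build/some-path/exiv2/test/data/img.tif"
print(pick_parser_buggy(cmd))  # tag-parser (wrong: "-pa" found in the path)
print(pick_parser_fixed(cmd))  # structure-parser (correct)
```

A more robust variant would split the command into arguments and compare whole tokens, but the space-delimited check is exactly the "cheap fix" described.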

Merge pull request #552 from LebedevRI/new-with-alignment

Try using C++17's aligned new, maybe works natively on MSYS?

Description of the bug

I'm using release 0.0.0+394~g06d9cda32 of Ansel, built from sources on Debian GNU/Linux "testing". When looking at the clones management portlet, I don't have an action button to create a new clone:

image

Expected behavior

Have buttons to create new clones (from original or as a copy of the current development image).

System

  • darktable version : 0.0.0+394~g06d9cda32
  • OS : Debian GNU/Linux - kernel 6.5.0
  • Linux - Distro : Debian Trixie (testing)
  • Memory : 24 GB
  • Graphics card : AMD Radeon R9 380
  • Graphics driver : AMD Gpupro
  • OpenCL installed : no
  • OpenCL activated : no
  • Xorg : Wayland
  • Desktop : GNOME 44.5

Hey @tflorac,

those actions have been moved to "Edit" in the global menu.

Hey @lukadh, do you know why some commands (history compression, for example) are still available both in the menu and in the side panel (which can be quite practical, in my opinion), while others are only in the menu?

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison is base (a92735d) 58.86% compared to head (ca9cf9e) 58.86%.

@@ Coverage Diff @@
## develop #552 +/- ##
===========================================
- Coverage 58.86% 58.86% -0.01% 
===========================================
 Files 234 234 
 Lines 13892 13880 -12 
 Branches 1938 1935 -3 
===========================================
- Hits 8178 8170 -8 
+ Misses 5596 5593 -3 
+ Partials 118 117 -1 
Flag Coverage Δ
benchmarks 8.28% <40.00%> (-0.07%) :arrow_down:
integration 47.44% <40.00%> (-0.02%) :arrow_down:
linux 56.80% <40.00%> (-0.01%) :arrow_down:
macOS 18.68% <0.00%> (+0.01%) :arrow_up:
rpu_u 47.44% <40.00%> (-0.02%) :arrow_down:
unittests 17.54% <0.00%> (+0.01%) :arrow_up:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

Files Coverage Δ
src/librawspeed/adt/AlignedAllocator.h 50.00% <40.00%> (-6.00%) :arrow_down:

... and 1 file with indirect coverage changes

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.



Codecov Report

Merging #2826 (db47dfb) into main (9002797) will not change coverage. The diff coverage is n/a.

@@ Coverage Diff @@
## main #2826 +/- ##
=======================================
 Coverage 64.00% 64.00% 
=======================================
 Files 103 103 
 Lines 22324 22324 
 Branches 10827 10827 
=======================================
 Hits 14288 14288 
 Misses 5818 5818 
 Partials 2218 2218 

@Mergifyio backport 0.28.x

backport 0.28.x

✅ Backports have been created

Thanks!

Handle absolute and relative paths for pkgconf generation (#2118)

  • Handle absolute and relative paths for pkgconf generation

  • CI: ensure MSYS2 Python is used
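
The usual pattern behind "handle absolute and relative paths" in .pc generation can be sketched as follows (a hypothetical helper, not the actual exiv2 CMake logic): a configured install dir that is already absolute is emitted verbatim, while a relative one is anchored at ${prefix}:

```python
# Illustrative sketch of pkg-config (.pc) variable generation that
# accepts both absolute and prefix-relative install directories.
import posixpath

def pc_dir(prefix: str, dir_value: str, var: str) -> str:
    if posixpath.isabs(dir_value):
        return f"{var}={dir_value}"          # absolute: use as-is
    return f"{var}=${{prefix}}/{dir_value}"  # relative: anchor at ${prefix}

print(pc_dir("/usr", "lib/x86_64-linux-gnu", "libdir"))
# libdir=${prefix}/lib/x86_64-linux-gnu
print(pc_dir("/usr", "/opt/exiv2/lib", "libdir"))
# libdir=/opt/exiv2/lib
```

Naively prepending ${prefix} to an already-absolute CMAKE_INSTALL_LIBDIR produces a broken double path, which is the class of bug such a change guards against.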

I have generated distortion and TCA profiles for my Canon IXY 220F for inclusion upstream. Please see attached. Thank you :)

I uploaded a sample to raw.pixls.us, but can provide more samples with straight lines etc. if desired.

lensfun_calibration.zip

I have a DNG generated from the Halide iOS camera app.

I used exiftool to update the PreviewImage, following advice from Phil Harvey, by replacing it with a foo.jpg. I can subsequently extract the image using exiftool and it's identical to the preview image.

However, exiftool and exiv2 (0.28.1) have inconsistencies:

  • exiv2 -pt reports an Exif.Image.StripByteCounts (via exiv2 -t ...) that is incorrect - I expect it to be the exact size of foo.jpg, the inserted preview image
  • exiv2 -pp reports the preview image as image/tiff (expected jpeg)
  • exiv2 -ep1 extracts a corrupt image

Running with exiv2 0.28.1 built from source on linux/x64 and exiftool 12.69

To Reproduce

$ exiv2 -pp IMG_6430.DNG
Preview 1: image/tiff, 512x384 pixels, 72598 bytes

$ exiv2 -pt IMG_6430.DNG
...
Exif.Image.StripByteCounts Long 1 72470
...

$ exiftool -v2 IMG_6430.DNG
..
 | 12) PreviewImageLength = 72470
 | - Tag 0x0117 (4 bytes, int32u[1])
...
  • Get a preview image for testing
# via ImageMagick:
$ convert -quality 40 -size 3000x2000 plasma:fractal foo.jpg

$ ls -l foo.jpg
rw-r--r-- 1 ray ray 188855 Nov 10 20:04 random.jpg
  • Update preview image in DNG:

$ exiftool \
    "-previewImage<=foo.jpg" \
    -tagsfromfile foo.jpg \
    -ifd0:imagewidth

## NOTICE image size reported as 188984 vs actual 188855 (+118 bytes diff)
## BUT looking at -pt it CORRECTLY reports size

So the -pt reports the actual size of the JPEG byte stream, and that is in agreement and what matters. The +118 bytes is the size of the TIFF preview image one would extract - the extra bytes are for the TIFF container that wraps that JPEG payload.

Not sure if there's an actual bug here, just a difference in behaviour: exiv2 decides to treat DNG/TIFF subimages as still-valid TIFF images when possible, and doesn't necessarily "unwrap" them unless they're specifically designated as preview/thumbnail images (using e.g. the JPEGInterchangeFormatLength tag instead of StripByteCounts, which in contrast seems to make LoaderTiff the priority loader).
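
The described behaviour can be sketched as a decision function (the tag names are real TIFF/EXIF tags; the function itself is hypothetical, not exiv2's loader-selection code): a subimage advertised via JPEGInterchangeFormatLength is treated as a bare JPEG preview, while one described only by StripByteCounts stays a TIFF subimage:

```python
# Hypothetical sketch of how a metadata library might classify a
# DNG/TIFF subimage, per the behaviour described above.
def preview_kind(tags: dict) -> str:
    if "JPEGInterchangeFormatLength" in tags:
        # Designated JPEG thumbnail/preview: report the bare JPEG stream.
        return "image/jpeg"
    if "StripByteCounts" in tags:
        # Generic strip-based subimage: still a valid TIFF, so the
        # container is reported rather than the unwrapped payload.
        return "image/tiff"
    return "unknown"

print(preview_kind({"StripByteCounts": 72470}))             # image/tiff
print(preview_kind({"JPEGInterchangeFormatLength": 72470})) # image/jpeg
```

This also explains the size discrepancy in the report: the TIFF-wrapped preview is a few bytes larger than the JPEG payload that StripByteCounts measures.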

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rating A), 0 Vulnerabilities (A), 0 Security Hotspots (A), 0 Code Smells (A).

No Coverage information. No Duplication information.

Going by https://libopenraw.freedesktop.org/formats/raf/ some bytes after the first meta and CFA offset and length were unknown. @hfiguiere

However, I think these describe the file layout when the RAF has two sub-images (like the ones from EXR sensors), so I posit it is a specification of some interleaving scheme with a skip (haven't seen anything other than 0), chunk size, and stride.

For the FinePix F700 sample from RPU, this now outputs:

STRUCTURE OF RAF FILE: Fujifilm - FinePix F700 - double width 16-bit unpacked little-endian (4_3).RAF
 Address | Length | Payload
 0 | 16 | magic : FUJIFILMCCD-RAW
 16 | 4 | data1 : 0201
 20 | 8 | data2 : FF390101
 28 | 32 | camera : FinePix F700
 60 | 4 | version : 0264
 64 | 20 | unknown : ...................
 84 | 4 | JPEG offset : 148
 88 | 4 | JPEG length : 482850
 92 | 4 | meta offset1 : 483000
 96 | 4 | meta length1 : 2148
 100 | 4 | CFA offset1 : 487296
 104 | 4 | CFA length1 : 6382592
 108 | 4 | CFA skip1 : 0
 112 | 4 | CFA chunk1 : 2944
 116 | 4 | CFA stride1 : 5888
 120 | 4 | meta offset2 : 485148
 124 | 4 | meta length2 : 2148
 128 | 4 | CFA offset2 : 490240
 132 | 4 | CFA length2 : 6382592
 136 | 4 | CFA skip2 : 0
 140 | 4 | CFA chunk2 : 2944
 144 | 4 | CFA stride2 : 5888
 148 | 482850 | JPEG data : ....%"Exif..II*
 483000 | 2148 | meta data1 : ...).....x......
 485148 | 2148 | meta data2 : ...).....x......
 487296 | 6382592 | CFA data1 : ...............
 490240 | 6382592 | CFA data2 : ...............

Note how the offset of the 2nd CFA data payload is one CFA chunk size of 2944 after the 1st one, giving the hint about interleaved chunks.

NB: "CFA length" seems to be the effective one, after deinterleaving.
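
Under the posited scheme, reading sub-image N means walking the shared data region in chunk-sized steps with the given stride. A synthetic sketch (the field meanings are the posited ones above, not a confirmed spec):

```python
# Sketch of the posited chunk-interleaving: two CFA payloads share one
# region; sub-image i starts at its offset and advances by `stride`,
# taking `chunk` bytes each step (skip is 0 in all observed files).
def read_interleaved(data: bytes, offset: int, length: int,
                     chunk: int, stride: int, skip: int = 0) -> bytes:
    out = bytearray()
    pos = offset + skip
    while len(out) < length:
        out += data[pos:pos + chunk]
        pos += stride
    return bytes(out[:length])

# Synthetic example: two 8-byte payloads interleaved in 4-byte chunks.
a, b = b"AAAAaaaa", b"BBBBbbbb"
region = a[:4] + b[:4] + a[4:] + b[4:]   # chunk=4, stride=8
assert read_interleaved(region, 0, 8, chunk=4, stride=8) == a
assert read_interleaved(region, 4, 8, chunk=4, stride=8) == b
```

This matches the dump above: CFA offset2 (490240) = CFA offset1 (487296) + chunk (2944), i.e. the two payloads alternate chunk by chunk, and "CFA length" is the deinterleaved length each walk should yield.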


Codecov Report

Merging #2824 (8979e51) into main (3e977c5) will decrease coverage by 0.11%. Report is 2 commits behind head on main. The diff coverage is 0.00%.

:exclamation: Current head 8979e51 differs from pull request most recent head 3536e0d. Consider uploading reports for the commit 3536e0d to get more accurate results

@@ Coverage Diff @@
## main #2824 +/- ##
==========================================
- Coverage 64.00% 63.89% -0.11% 
==========================================
 Files 103 103 
 Lines 22324 22361 +37 
 Branches 10827 10861 +34 
==========================================
 Hits 14288 14288 
- Misses 5818 5855 +37 
 Partials 2218 2218 
Files Coverage Δ
src/rafimage.cpp 17.15% <0.00%> (-3.15%) :arrow_down:

Thanks for all the changes @kevinbackhouse


Codecov Report

Merging #2823 (cf4bf25) into main (3e977c5) will not change coverage. The diff coverage is n/a.

@@ Coverage Diff @@
## main #2823 +/- ##
=======================================
 Coverage 64.00% 64.00% 
=======================================
 Files 103 103 
 Lines 22324 22324 
 Branches 10827 10827 
=======================================
 Hits 14288 14288 
 Misses 5818 5818 
 Partials 2218 2218 
Files Coverage Δ
src/sonymn_int.cpp 79.90% <ø> (ø)

@boardhead


Codecov Report

Merging #2822 (683ee64) into 0.28.x (72129e7) will not change coverage. Report is 2 commits behind head on 0.28.x. The diff coverage is n/a.

@@ Coverage Diff @@
## 0.28.x #2822 +/- ##
=======================================
 Coverage 63.99% 63.99% 
=======================================
 Files 103 103 
 Lines 22338 22338 
 Branches 10821 10821 
=======================================
 Hits 14296 14296 
 Misses 5818 5818 
 Partials 2224 2224 
Files Coverage Δ
src/sonymn_int.cpp 79.90% <ø> (ø)

Merge pull request #551 from LebedevRI/misc

Some misc C++20 cleanups

Codecov Report

Attention: 34 lines in your changes are missing coverage. Please review.

Comparison is base (4fa2d5f) 58.85% compared to head (d4fbebc) 58.88%.

@@ Coverage Diff @@
## develop #551 +/- ##
===========================================
+ Coverage 58.85% 58.88% +0.02% 
===========================================
 Files 234 234 
 Lines 13902 13887 -15 
 Branches 1942 1938 -4 
===========================================
- Hits 8182 8177 -5 
+ Misses 5599 5593 -6 
+ Partials 121 117 -4 
Flag Coverage Δ
benchmarks 8.35% <4.95%> (+0.16%) :arrow_up:
integration 47.45% <50.87%> (-0.02%) :arrow_down:
linux 56.80% <53.78%> (+<0.01%) :arrow_up:
macOS 18.66% <25.43%> (+0.04%) :arrow_up:
rpu_u 47.45% <50.87%> (-0.02%) :arrow_down:
unittests 17.53% <34.71%> (-0.54%) :arrow_down:
windows ∅ <ø> (∅)

Flags with carried forward coverage won't be shown. Click here to find out more.

Files Coverage Δ
src/librawspeed/adt/BitIterator.h 0.00% <ø> (ø)
src/librawspeed/adt/NORangesSet.h 88.88% <100.00%> (+5.55%) :arrow_up:
src/librawspeed/adt/Point.h 72.15% <ø> (-0.69%) :arrow_down:
src/librawspeed/codes/AbstractPrefixCodeDecoder.h 54.05% <100.00%> (-1.21%) :arrow_down:
src/librawspeed/codes/HuffmanCode.h 75.40% <100.00%> (ø)
src/librawspeed/common/Common.h 87.95% <100.00%> (-0.15%) :arrow_down:
src/librawspeed/decoders/Cr2Decoder.cpp 56.36% <100.00%> (ø)
src/librawspeed/decoders/CrwDecoder.cpp 69.16% <100.00%> (ø)
...rawspeed/decompressors/AbstractDngDecompressor.cpp 24.21% <ø> (ø)
...c/librawspeed/decompressors/AbstractLJpegDecoder.h 86.66% <100.00%> (ø)
... and 35 more

... and 2 files with indirect coverage changes

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.

The installed CMake module contains an arch-dependent library description. It should be installed not in DATADIR but in LIBDIR.

Signed-off-by: Tomasz Kłoczko (cherry picked from commit 3e977c5cf014750fa850fcbfb2715115c75e4610)


Codecov Report

Merging #2820 (0e668a5) into 0.28.x (72129e7) will not change coverage. Report is 1 commits behind head on 0.28.x. The diff coverage is n/a.

@@ Coverage Diff @@
## 0.28.x #2820 +/- ##
=======================================
 Coverage 63.99% 63.99% 
=======================================
 Files 103 103 
 Lines 22338 22338 
 Branches 10821 10821 
=======================================
 Hits 14296 14296 
 Misses 5818 5818 
 Partials 2224 2224 

Currently we require GCC 12.x or CLang 14.x to build rawspeed.

The Debian 11 release which has been released the Jun 10th 2023 has only GCC 10.x and CLang 13.x.

This means that we cannot compile darktable on a version which is about 5 months old.

This is really too strict. Can we keep compatibility with CLang 13.x to at least have a way to compile on Debian 11, and possibly other distributions in the same situation?

The Debian 11 release which has been released the Jun 10th 2023 has only GCC 10.x and CLang 13.x.

This is not true. 1. Debian 11 "bullseye" was released August 14th, 2021, i.e. 2 years ago. 2. Debian 12 "bookworm" was released June 10th, 2023, i.e. 5 months ago.

As per the supported compiler policy, that we have discussed a number of times elsewhere, this is working as intended, quote:

# We strive to keep the [darktable] software releases (but not necessarily
# development versions!) buildable with the versions of the dependencies
# provided out-of-the-box in the following distributions:
# * debian stable
# * latest(!) ubuntu LTS
# * oldest(*) maintained macOS release (assuming current cadence of
# one major macOS release per year, and 3 (three) year shelf-life,
# so last three releases are to be supported)

Debian 12 "bookworm" is stable, and it is supported. Debian 11 "bullseye" is now oldstable, and is not supported.

I see no option to set a star rating on pictures. It is not shown below the picture, and hotkeys 1-5 also do not work.

I have tried the most current appimage and built from source. I have also tried older appimages with the same result.

To Reproduce

  1. Load CR3 Raw
  2. No stars are shown

Screenshots

Screenshot_20231107_011115

System

  • darktable version : Ansel-06d9cda-x86_64
  • OS : Linux - kernel 6.5
  • Linux - Distro : Manjaro KDE
  • Memory : 32GB
  • Graphics card : AMD 6900XT

Go to the global menu. Display - Thumbnail overlays. If that solves your issue, please close it. Thanks!

Thank you, the stars are now showing. However, the hotkeys still do not work.

Edit: I also use darktable 4.4, where the hotkeys work as expected, if that information is worth anything.

Okay, very strange behavior. I manually set a color label and star on an image, and since then the shortcuts have started working. There still seems to be a bug here, but I will close the issue.

Edit: I tried with a freshly installed AppImage after deleting the config folder, and could still reproduce this error. I had to enable and disable a color filter before shortcuts started working.


The whole lighttable is being rewritten to fix shortcuts and selection issues.

install exiv2 cmake module in LIBDIR

The installed CMake module contains an arch-dependent library description. It should be installed not in DATADIR but in LIBDIR.

Signed-off-by: Tomasz Kłoczko



CMake will look for these files in multiple locations, so not an error per se, but I guess this might be a matter of preference...

Not sure why it was changed though, I think it was LIBDIR before 0.28.1?

See https://github.com/Exiv2/exiv2/pull/2703 @neheb @Ryanf55

Not sure why it was changed though, I think it was LIBDIR before 0.28.1?

In 0.28.0 it was in libdir.

Go for it. This was a mistake on my end; seems like a majority are installed in lib not share. As stated, this doesn't affect behavior for users since both are supported by CMake.

On ubuntu 22:

$ find /usr/ -type f -name *Config.cmake | grep /usr/share | wc -l
11
$ find /usr/ -type f -name *Config.cmake | grep /usr/lib | wc -l
77

Thank you 👍

@mergify backport 0.28.x


Regression test for https://github.com/Exiv2/exiv2/security/advisories/GHSA-hrw9-ggg3-3r4r

Credit to OSS-Fuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=61675 Avoid integer overflow in the calculation of available_out.

This is the same fix that I just merged into the 0.28.x branch: https://github.com/Exiv2/exiv2/commit/d8f82d5db1fed05a18aa0f84f1cc8899d011a18b

It fixes CVE-2023-44398.
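
The class of bug being fixed can be sketched generically (this is not the actual exiv2 code; only the variable name available_out comes from the description above): an unchecked subtraction in a size computation wraps around in C's unsigned arithmetic, so the guard is to test before computing. Emulating 64-bit size_t in Python:

```python
# Generic sketch of an integer-overflow guard for a size computation,
# emulating 64-bit size_t wraparound. Not the actual exiv2 fix.
SIZE_MAX = 2**64 - 1

def available_out_unchecked(total: int, used: int) -> int:
    return (total - used) & SIZE_MAX   # wraps to a huge value if used > total

def available_out_checked(total: int, used: int) -> int:
    if used > total:                   # guard before subtracting
        raise ValueError("corrupt stream: used exceeds total")
    return total - used

print(available_out_unchecked(100, 200))  # 18446744073709551516 (wrapped)
try:
    available_out_checked(100, 200)
except ValueError as e:
    print(e)                              # corrupt stream: used exceeds total
```

A wrapped "available output" value makes the decompressor believe it has nearly unlimited buffer space, which is why such overflows in untrusted-input parsers are security-relevant.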


Codecov Report

Merging #2818 (39e56d5) into main (8512c4f) will increase coverage by 0.01%. The diff coverage is 0.00%.

@@ Coverage Diff @@
## main #2818 +/- ##
==========================================
+ Coverage 63.98% 64.00% +0.01% 
==========================================
 Files 103 103 
 Lines 22324 22324 
 Branches 10827 10827 
==========================================
+ Hits 14284 14288 +4 
+ Misses 5824 5818 -6 
- Partials 2216 2218 +2 
Files Coverage Δ
src/bmffimage.cpp 73.93% <0.00%> (+0.81%) :arrow_up:

@neheb: do you know why the meson/FreeBSD check is so slow?

It runs in a VM. Setting that up is slow.

edit: no this is something else. I’ll look into it.