Tidy up logging and rework tests from scratch

This commit is contained in:
Javanaut
2026-04-09 12:46:24 +02:00
parent f9c8b8ac5e
commit 60ae58500a
84 changed files with 1283 additions and 187 deletions

5
.gitignore vendored
View File

@@ -1,4 +1,5 @@
__pycache__/
*.py[cod]
junk/
.vscode
.ipynb_checkpoints/
@@ -12,4 +13,6 @@ bin/conversiontest.py
build/
dist/
*.egg-info/
.venv/
venv/
.codex

View File

@@ -8,6 +8,8 @@
- The biggest near-term wins are in startup cost, repeated subprocess work, repeated database query patterns, and general repo hygiene.
- This list is intentionally optimization-oriented rather than bug-oriented. Some items below also improve correctness or maintainability, but they were selected because they can reduce runtime cost, operator friction, or iteration overhead.
- A first modern integration slice now exists under [`tests/integration/subtrack_mapping`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping). Remaining test-suite cleanup is now mostly about migrating and shrinking the legacy harness surface under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy).
- FFX logger setup now reuses named handlers, and fallback logger access no longer mutates handlers in ordinary constructors and helpers.
## Focused Snapshot
@@ -16,17 +18,15 @@
- Collapse repeated `ffprobe` calls into a single probe result per source file.
- Replace `query.count()` plus `first()` patterns with single-query ORM accessors.
- Cache or precompile filename pattern regexes instead of scanning every pattern for every file.
- Guard logger handler installation to avoid duplicated handlers and noisy repeated setup.
- Highest-leverage repo and workflow optimizations:
- Stop tracking nested `__pycache__` output and other generated artifacts.
- Consolidate setup and upgrade tooling to reduce overlapping shell-script responsibilities.
- Continue migrating the oversized legacy test/combinator surface into focused modern tests so it is easier to run, debug, and extend.
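The `count()` plus `first()` collapse above can be sketched against a plain SQLite stand-in (table and column names here are hypothetical, not taken from the FFX schema); in SQLAlchemy terms the same collapse is `one_or_none()` or a bare `first()` followed by a `None` check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pattern (id INTEGER PRIMARY KEY, show_id INTEGER)")
conn.execute("INSERT INTO pattern (show_id) VALUES (7)")

# Before: two round-trips -- an existence check, then the actual fetch.
# exists = conn.execute("SELECT COUNT(*) FROM pattern WHERE show_id = 7").fetchone()[0]
# row = conn.execute("SELECT id FROM pattern WHERE show_id = 7").fetchone() if exists else None

# After: one round-trip; fetchone() already returns None when nothing matches.
row = conn.execute("SELECT id FROM pattern WHERE show_id = ?", (7,)).fetchone()
pattern_id = row[0] if row is not None else None
```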
## Optimization Candidates
1. CLI startup and import cost
- [`src/ffx/cli.py`](/home/osgw/.local/src/codex/ffx/src/ffx/cli.py) imports a large portion of the application at module import time, even for cheap commands such as `version`, `help`, `setup_dependencies`, and `upgrade`.
- Optimization:
- Move heavy imports into the commands that actually need them.
- Keep the CLI root importable with only core stdlib and Click dependencies.
@@ -80,28 +80,8 @@
- Better failure diagnosis.
- Cleaner process management semantics.
7. Tooling overlap and naming drift
- There are still overlapping prep and setup entrypoints across [`tools/prepare.sh`](/home/osgw/.local/src/codex/ffx/tools/prepare.sh), [`tools/setup.sh`](/home/osgw/.local/src/codex/ffx/tools/setup.sh), and newer CLI maintenance commands.
- Several helper classes install `NullHandler` instances ad hoc, for example [`src/ffx/process.py`](/home/osgw/.local/src/codex/ffx/src/ffx/process.py), [`src/ffx/tmdb_controller.py`](/home/osgw/.local/src/codex/ffx/src/ffx/tmdb_controller.py), [`src/ffx/media_descriptor.py`](/home/osgw/.local/src/codex/ffx/src/ffx/media_descriptor.py), and [`src/ffx/helper.py`](/home/osgw/.local/src/codex/ffx/src/ffx/helper.py).
- Optimization:
- Guard handler installation so each logger is configured once.
- Prefer module-level logger setup patterns over per-instance handler mutation.
- Expected value:
- Less duplicate logging.
- Lower confusion in long-running or repeatedly invoked contexts.
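The guarded setup can be sketched as follows (function name and log format are illustrative, not the repo's actual helper):

```python
import logging

def get_ffx_logger(name="ffx", logfile=None):
    # Handlers are installed only on the first call for a given name;
    # repeated calls return the already-configured logger unchanged.
    logger = logging.getLogger(name)
    if not logger.handlers:
        console = logging.StreamHandler()
        console.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(console)
        if logfile is not None:
            logger.addHandler(logging.FileHandler(logfile))
        logger.setLevel(logging.INFO)
    return logger
```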
8. Repo-local hygiene for generated Python artifacts
- The repo currently contains nested compiled artifacts under `src/ffx/__pycache__/...`.
- `.gitignore` only ignores `__pycache__` at the repo root, not recursive `__pycache__/`.
- Optimization:
- Ignore `__pycache__/` recursively and clean tracked generated files.
- Consider ignoring local virtualenv or other generated tool directories if they may appear in-repo later.
- Expected value:
- Cleaner diffs and scans.
- Lower repo noise.
9. Tooling overlap and naming drift
- There are now multiple prep-related scripts: [`tools/prepare.sh`](/home/osgw/.local/src/codex/ffx/tools/prepare.sh), [`tools/setup.sh`](/home/osgw/.local/src/codex/ffx/tools/setup.sh), and the legacy-like [`tools/ffx_update.sh`](/home/osgw/.local/src/codex/ffx/tools/ffx_update.sh).
- Optimization:
- Decide which scripts remain canonical.
- Replace or remove legacy wrappers once equivalent CLI commands exist.
@@ -110,7 +90,7 @@
- Less operator confusion.
- Fewer duplicated procedures to maintain.
8. Placeholder UI surfaces should either ship or disappear
- [`src/ffx/help_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/help_screen.py) and [`src/ffx/settings_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/settings_screen.py) are placeholders.
- Optimization:
- Either remove them from the active UI surface or complete them.
@@ -119,7 +99,7 @@
- Leaner interface.
- Lower UX ambiguity.
9. Large Textual screens repeat configuration and controller loading
- Screens such as [`src/ffx/media_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/media_details_screen.py), [`src/ffx/pattern_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/pattern_details_screen.py), and [`src/ffx/show_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/show_details_screen.py) repeat setup patterns and local metadata filtering extraction.
- Optimization:
- Extract a shared screen base or helper for common config/controller/bootstrap logic.
@@ -128,7 +108,7 @@
- Lower maintenance overhead.
- Easier UI iteration.
10. Several helper functions are unfinished or dead-weight
- [`src/ffx/helper.py`](/home/osgw/.local/src/codex/ffx/src/ffx/helper.py) contains `permutateList(...): pass`.
- There are many combinator and conversion placeholders across tests and migrations.
- Optimization:
@@ -138,17 +118,18 @@
- Smaller mental model.
- Less time spent re-evaluating inactive paths.
11. Test suite shape is expensive to understand and likely expensive to run
- The project still carries a large legacy matrix of combinator files under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy), several placeholder `pass` implementations, and at least one suspicious filename with an embedded space: [`tests/legacy/disposition_combinator_2_3 .py`](/home/osgw/.local/src/codex/ffx/tests/legacy/disposition_combinator_2_3 .py).
- A first focused replacement slice now exists in [`tests/integration/subtrack_mapping/test_cli_bundle.py`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping/test_cli_bundle.py), so the remaining work is migration and consolidation rather than creating the modern test shape from scratch.
- Optimization:
- Continue replacing broad combinator matrices with focused parametrized integration and unit tests.
- Retire the bespoke legacy discovery and runner path once equivalent coverage exists.
- Normalize file naming and test discovery conventions.
- Expected value:
- Faster contributor onboarding.
- Easier CI adoption later.
12. Process resource limiting semantics could be clearer
- [`src/ffx/process.py`](/home/osgw/.local/src/codex/ffx/src/ffx/process.py) prepends `nice` and `cpulimit` directly when values are set.
- Optimization:
- Validate and document effective behavior for combined `nice` + `cpulimit`.
@@ -157,7 +138,7 @@
- Fewer surprises in production-like runs.
- Easier support for user-reported performance behavior.
13. Import-time dependency coupling makes maintenance commands brittle
- Even after recent CLI maintenance additions, the top-level CLI module still imports most application modules before Click dispatch.
- Optimization:
- Push imports for ORM, Textual, TMDB, ffmpeg helpers, and descriptors behind the commands that actually need them.
@@ -165,7 +146,7 @@
- Maintenance commands such as setup and upgrade stay usable when optional runtime dependencies are broken.
- Better separation between media runtime code and maintenance tooling.
14. Regex and string utility cleanup
- [`src/ffx/helper.py`](/home/osgw/.local/src/codex/ffx/src/ffx/helper.py) still emits a `SyntaxWarning` for `RICH_COLOR_PATTERN`.
- Optimization:
- Convert regex literals to raw strings where appropriate.
@@ -174,7 +155,7 @@
- Cleaner runtime output.
- Less warning noise during dry-run maintenance commands.
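The raw-string fix is mechanical; the pattern below is a hypothetical stand-in for `RICH_COLOR_PATTERN`, chosen only to illustrate the warning class:

```python
import re

# Non-raw literal: "\[" is an invalid escape sequence and triggers a
# SyntaxWarning on modern Python, even though the regex still works:
#   RICH_COLOR_PATTERN = re.compile("\[/?[a-z ]+\]")

# Raw-string literal: identical pattern, no warning.
RICH_COLOR_PATTERN = re.compile(r"\[/?[a-z ]+\]")

stripped = RICH_COLOR_PATTERN.sub("", "[bold]Episode Title[/bold]")
```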
15. Database startup always runs schema creation and version checks
- [`src/ffx/database.py`](/home/osgw/.local/src/codex/ffx/src/ffx/database.py) runs `Base.metadata.create_all(...)` and version checks every time a DB-backed context is created.
- Optimization:
- Measure startup cost and consider separating bootstrapping from ordinary command execution.
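One possible split, sketched against an in-memory SQLite database; the `property`/`db_version` names mirror the `Property` entity used for versioning, while the function names and version value are hypothetical:

```python
import sqlite3

REQUIRED_DB_VERSION = "3"  # hypothetical value

def bootstrap(conn):
    # Run once, from an explicit setup/upgrade command -- not on every
    # DB-backed context creation.
    conn.execute("CREATE TABLE IF NOT EXISTS property (key TEXT PRIMARY KEY, value TEXT)")
    conn.execute(
        "INSERT OR REPLACE INTO property (key, value) VALUES ('db_version', ?)",
        (REQUIRED_DB_VERSION,),
    )

def open_context(conn):
    # Ordinary commands only verify the version; they never create schema.
    row = conn.execute("SELECT value FROM property WHERE key = 'db_version'").fetchone()
    if row is None or row[0] != REQUIRED_DB_VERSION:
        raise RuntimeError("database not bootstrapped or version mismatch")
    return conn
```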
@@ -198,7 +179,6 @@
1. Triage the list into quick wins, medium refactors, and long-horizon cleanup.
2. Tackle the cheapest high-impact items first:
- recursive `__pycache__/` ignore and cleanup,
- regex raw-string warning cleanup,
- `count()` plus `first()` query cleanup,
- single-call `ffprobe` refactor.

View File

@@ -27,6 +27,11 @@ Homepage = "https://gitea.maveno.de/Javanaut/ffx"
Repository = "https://gitea.maveno.de/Javanaut/ffx.git"
Issues = "https://gitea.maveno.de/Javanaut/ffx/issues"
[project.optional-dependencies]
test = [
"pytest",
]
[build-system]
requires = [
"setuptools",
@@ -35,4 +40,13 @@ requires = [
build-backend = "setuptools.build_meta"
[project.scripts]
ffx = "ffx.cli:ffx"
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-ra"
markers = [
"integration: exercises the FFX bundle with real ffmpeg/ffprobe processes",
"subtrack_mapping: covers requirements/subtrack_mapping.md",
]

View File

@@ -32,7 +32,7 @@
## High-Level Building Blocks
- Frontend, CLI, API, or worker:
- A Click-based CLI in [`src/ffx/cli.py`](/home/osgw/.local/src/codex/ffx/src/ffx/cli.py), exposed as the `ffx` command and via `python -m ffx`.
- A Textual terminal UI rooted in [`src/ffx/ffx_app.py`](/home/osgw/.local/src/codex/ffx/src/ffx/ffx_app.py) with screens for shows, patterns, file inspection, tracks, tags, and shifted seasons.
- Core business logic:
- Descriptor objects model media files, shows, and tracks.
@@ -50,7 +50,7 @@
- Key entities or records:
- `Show`: canonical TV show metadata plus digit-formatting rules for generated filenames.
- `Pattern`: regex rule tying filenames to one show and one target media schema.
- `Track` and `TrackTag`: persisted target stream records, codec, dispositions, audio layout, and stream-level tags. Detailed source-to-target mapping rules live in `requirements/subtrack_mapping.md`.
- `MediaTag`: persisted container-level metadata for a pattern.
- `ShiftedSeason`: mapping from source numbering ranges to adjusted season and episode numbers.
- `Property`: internal key-value storage currently used for database versioning.
@@ -63,7 +63,6 @@
- Only supported media-file extensions are accepted for conversion.
- Stored database version must match the runtime-required version.
- A normalized descriptor may have at most one default and one forced stream per relevant track type.
- Stored target tracks must refer to valid source tracks of matching types.
- Shifted-season ranges are intended not to overlap for the same show and season.
- TMDB lookups require a show ID and season and episode numbers.
- Error-handling approach:

View File

@@ -0,0 +1,74 @@
# Subtrack Mapping
This file defines the behavioral contract for mapping input subtracks to output
subtracks during conversion.
Primary source: actual tool code in `src/ffx/`.
Secondary source: `tests/legacy/`, used only to clarify intent and reveal gaps.
## Scope
- Ensuring each target subtrack is created from the corresponding source-subtrack information, including stream-level metadata.
- Mapping input streams to output streams during conversion.
- Using persisted pattern-track definitions from the database as the target schema.
- Allowing omission and reordering of retained tracks.
- Keeping stream-level metadata attached to the correct source-derived logical track after remapping.
- Normalizing target output into ordered track groups: video, audio, subtitle, then special types such as fonts or images.
## Terms
- `source_index`: identity of the originating input stream from ffprobe or an imported source descriptor.
- `index`: final output-track order across all retained tracks.
- `sub_index`: per-type position within the retained tracks of one type, for example audio stream `0` or subtitle stream `1`.
- `target schema`: stored or constructed output-track definition that decides which tracks are kept, omitted, reordered, and rewritten.
- `separate source file`: additional file bound to one target track slot whose media payload replaces the regular source payload for that slot.
## Rules
- `SUBTRACK_MAPPING-0001`: The system shall represent source-stream identity separately from output order. `source_index`, `index`, and `sub_index` are distinct concepts and shall not be collapsed into one field.
- `SUBTRACK_MAPPING-0002`: The system shall derive `source_index` for probed tracks from the original ffprobe stream index and preserve that identity through conversion planning.
- `SUBTRACK_MAPPING-0003`: Pattern-backed track definitions stored in the database shall persist both target output order and originating source-stream identity.
- `SUBTRACK_MAPPING-0004`: When a filename matches a pattern, the pattern target schema shall be the source of truth for which source tracks are retained, which are omitted, and in what order retained tracks appear in the output.
- `SUBTRACK_MAPPING-0005`: A target track may refer only to an existing source track of the same type. Conversion shall fail fast when a target track refers to a nonexistent source stream or a source stream of a different type.
- `SUBTRACK_MAPPING-0006`: The ffmpeg mapping phase shall be generated from target output order while resolving each retained output track back to its originating source stream via `source_index`.
- `SUBTRACK_MAPPING-0007`: Reordering and omission shall preserve logical track identity. Stream-level metadata, titles, languages, and disposition decisions shall stay attached to the correct source-derived logical track after mapping.
- `SUBTRACK_MAPPING-0008`: The system shall support one-off CLI stream-order overrides without requiring prior database edits.
- `SUBTRACK_MAPPING-0009`: Operator-facing inspection and editing surfaces shall expose enough source-versus-target information to let a user reason about subtrack mapping decisions.
- `SUBTRACK_MAPPING-0010`: Test coverage for subtrack mapping shall assert source-derived identity, omission, and output order explicitly. Final track counts or final type sequences alone are insufficient proof of correct mapping.
- `SUBTRACK_MAPPING-0011`: Retained target tracks shall appear in ordered groups: video track or tracks first, then audio tracks, then subtitle tracks, then special types such as fonts or images. Within each group, the target schema shall define the order.
- `SUBTRACK_MAPPING-0012`: Track omission is valid when required by output compatibility, when needed to normalize source tracks into the required target group order and schema, or when explicitly requested by database rules or CLI options.
- `SUBTRACK_MAPPING-0013`: If source tracks do not already comply with the required target group order, conversion shall reorder retained tracks to match the target ordering contract without losing source-track identity or stream-level metadata lineage.
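The identity model above can be sketched in a few lines; the class and function names here are illustrative, not the tool's actual types:

```python
from dataclasses import dataclass

GROUP_ORDER = {"video": 0, "audio": 1, "subtitle": 2}

@dataclass
class PlannedTrack:
    source_index: int  # originating input-stream identity (SUBTRACK_MAPPING-0002)
    index: int         # final output order across all retained tracks
    sub_index: int     # per-type position within the retained tracks
    kind: str          # "video" | "audio" | "subtitle"

def plan(retained):
    # `retained` lists (source_index, kind) pairs in target-schema order;
    # omitted source tracks simply never appear here. Grouping is by type
    # (0011), and the stable sort preserves schema order within each group.
    ordered = sorted(retained, key=lambda t: GROUP_ORDER[t[1]])
    planned, per_type = [], {}
    for out_index, (src, kind) in enumerate(ordered):
        sub = per_type.get(kind, 0)
        per_type[kind] = sub + 1
        planned.append(PlannedTrack(src, out_index, sub, kind))
    return planned
```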
## Separate Additional Source Files
- `SUBTRACK_MAPPING-0014`: A separate source file may substitute the media payload of one target subtrack without changing that target track's intended output position.
- `SUBTRACK_MAPPING-0015`: When a separate source file is used, the target track shall remain bound to the corresponding logical source track for mapping, validation, and metadata lineage.
- `SUBTRACK_MAPPING-0016`: Metadata for a substituted target track shall be merged from the regular source track and the separate source file when available.
- `SUBTRACK_MAPPING-0017`: If the separate source file provides a metadata field that is also present on the regular source track, the separate source file value shall win in the target output.
- `SUBTRACK_MAPPING-0018`: If a metadata field is absent from the separate source file, the system shall fall back to the corresponding metadata from the regular source track or target schema rewrite rules.
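Rules 0016 through 0018 amount to a dictionary merge with separate-file precedence; `merge_track_metadata` is a hypothetical helper name, not code from the tool:

```python
def merge_track_metadata(regular, separate):
    # Start from the regular source track's metadata, then overlay every
    # field the separate source file actually provides (0017). Fields the
    # separate file omits keep their regular-source values (0018).
    merged = dict(regular)
    merged.update({key: value for key, value in separate.items() if value is not None})
    return merged
```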
## Acceptance
- Given a source media descriptor and a pattern-backed target schema, the planned output tracks can be listed in final output order and each retained track can still be traced to one originating source stream.
- Planned output order follows grouped target order: video, audio, subtitle, then special types.
- Tracks not referenced by the target schema are omitted from output mapping.
- Tracks may also be omitted when they are incompatible with the chosen output format or explicitly excluded by database or CLI rules.
- Two retained target tracks never originate from the same source stream unless duplication is implemented explicitly as a separate feature.
- If target-track metadata is rewritten after reordering, it is written onto the correct source-derived logical track rather than the track that merely occupies the same final output position.
- Invalid target-to-source references fail deterministically before the conversion job is launched.
- If a separate source file substitutes one target track, that track keeps its target slot and ordering while metadata is merged with separate-file values taking precedence when both sides provide the same field.
- A test proving subtrack mapping must assert at least one of: exact `source_index` to output-order mapping, omission of named source tracks, or preservation of per-track metadata after reorder.
## Test Notes
- `tests/legacy/scenario.py` names pattern behavior as `Filter/Reorder Tracks`.
- `tests/legacy/scenario_4.py` is the strongest end-to-end signal because it runs DB-backed conversion and reapplies source indices before assertion.
- `tests/legacy/track_tag_combinator_2_0.py` and `tests/legacy/track_tag_combinator_3_4.py` sort result tracks by `source_index` before checking tags, which matches the intended identity model.
- Legacy permutation combinators define permutations but their assertion functions are stubs.
- Some legacy scenarios produce `AP` and `SP` selectors but do not execute them.
## Risks
- `src/ffx/media_descriptor.py` contains an explicit `rearrangeTrackDescriptors()` path whose current implementation appears defective and under-tested.
- Separate-source-file metadata precedence is only partly expressed in current implementation paths and should be covered directly in the rewritten test suite.
- Production code expresses the mapping contract more clearly than the legacy harness, so a rewrite should add direct logic-level tests for mapping and reorder planning.

130
requirements/tests.md Normal file
View File

@@ -0,0 +1,130 @@
# Test Rewrite
This file captures the structure executed by `tests/legacy_runner.py` today and
defines the target shape for a complete rewrite.
Detailed product rules for source-to-target subtrack mapping live in
`requirements/subtrack_mapping.md`. This file describes only how tests cover
that area.
## Current Harness
- Entrypoint: `python tests/legacy_runner.py run`
- Runner style: custom Click CLI, not `pytest` or `unittest`
- Commands:
- `run`: discover scenario files, instantiate each scenario, run yielded jobs
- `dupe`: helper command that creates duplicate media fixtures; not part of the test run
- Filters: `--scenario`, `--variant`, `--limit`
- Shared context:
- builds one mutable dict for the whole run
- installs loggers and writes `ffx_test_report.log`
- creates `ConfigurationController` eagerly
- tracks only passed and failed counters
- Discovery:
- scenario files: `tests/legacy/scenario_*.py`
- combinators: `glob + importlib + inspect` by filename convention
- ordering: implicit glob order, no explicit sorting
- Skip behavior:
- Scenario 4 is skipped when `TMDB_API_KEY` is missing
- only `TMDB_API_KEY_NOT_PRESENT_EXCEPTION` is caught at scenario construction time
## Current Scenarios
- `1`: `tests/legacy/scenario_1.py`
- focus: basename generation without pattern lookup or TMDB
- inputs per job: `1`
- jobs: `140`
- expected failures: `0`
- execution: build one synthetic source file, run `python -m ffx convert`, assert filename selectors only
- selectors executed: `B`, `L`, `I`
- selectors defined but not executed: `S`, `R`
- `2`: `tests/legacy/scenario_2.py`
- focus: conversion matrix over media layouts, dispositions, tags, and permutations
- inputs per job: `1`
- jobs: `8193`
- expected failures: `3267`
- execution: build one synthetic source file, run `python -m ffx convert`, probe result with `FileProperties`, assert track layout and selected audio and subtitle metadata
- selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
- selectors defined but not executed: `MT`, `AP`, `SP`, `J`
- `4`: `tests/legacy/scenario_4.py`
- focus: pattern-driven batch conversion with SQLite state and live TMDB naming
- inputs per job: `6`
- jobs: `768`
- expected failures: `336`
- execution: build six synthetic preset files, recreate temp SQLite DB, insert show and pattern, run one batch convert command, query TMDB during assertions
- selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
- selectors defined but not executed: `MT`, `AP`, `SP`, `J`
- notes:
- uses `MediaCombinator6` only
- issues live HTTP requests through `TmdbController` with no request cache
## Current Combinator Families
- scenario files discovered: `3`
- basename combinators discovered: `2`
- media combinators discovered: `8`
- media tag combinators discovered: `3`
- disposition combinator 2 variants: `4`
- disposition combinator 3 variants: `5`
- track tag combinator 2 variants: `4`
- track tag combinator 3 variants: `5`
- indicator variants: `7`
- label variants: `2`
- show variants: `3`
- release variants: `3`
- permutation 2 variants: `2`
- permutation 3 variants: `3`
## Current Totals
- full run without TMDB: `8333`
- full run with TMDB: `9101`
- Scenario 4 generated source files: `4608`
- Scenario 4 live TMDB episode queries: `4608`
## Current Behavior Areas
- output basename rules for label, season and episode indicator, show name, and release suffix combinations
- track layout normalization across the eight media combinator shapes from `VA` through `VAASSS`
- two-track and three-track disposition edge cases, including intentional failure cases
- two-track and three-track track-tag preservation checks, including checks that sort results by source identity
- container-level media tag handling
- pattern-backed conversion against a temporary SQLite database
- TMDB-assisted episode naming for batch conversion
## Structural Findings
- The suite is process-heavy: most jobs run `ffmpeg` to generate a fixture and then spawn the FFX CLI as a subprocess.
- The suite is integration-first and has almost no isolated unit-level coverage for pure logic.
- The base `Combinator` class is a placeholder and is not the real abstraction boundary used by the suite.
- Many combinator methods are placeholders: there are `25` `pass` statements across the current test modules.
- Several assertion families are never executed because scenario selector dispatch is incomplete.
- Scenario comments mention a Scenario 3, but no `scenario_3.py` exists.
- `tests/legacy/_basename_combinator_1.py` is effectively orphaned because discovery only matches `basename_combinator_*.py`.
- `tests/legacy/disposition_combinator_2_3 .py` contains an embedded space in the filename and is still part of discovery.
- Expected failures are validated only as subprocess return-code matches, not as specific error types or messages.
- The current suite depends on `ffmpeg`, `ffprobe`, SQLite, the local Python environment, and for Scenario 4 a live TMDB API key plus network access.
## Rewrite Target
- Replace the custom Click harness with a standard test runner, preferably `pytest`.
- Split the suite into explicit layers: unit, integration, and optional external-system tests.
- Keep unit tests as the default path and make them runnable without `ffmpeg`, `ffprobe`, TMDB, or a user config directory.
- Model discovery explicitly in code instead of relying on glob-plus-reflection naming conventions.
- Convert the current Cartesian-product combinators into readable parametrized cases grouped by behavior area.
- Preserve the current behavior areas, but represent them with targeted cases instead of thousands of opaque variant IDs.
- Make every assertion family explicit and executable; there must be no selector that is produced but never consumed.
- Replace live TMDB access with fixtures or mocks in normal runs; any live-contract test must be opt-in.
- Replace ad hoc subprocess return-code checks with assertions on typed exceptions, stderr content, or structured outputs.
- Provide small reusable media fixtures or fixture builders so only a narrow integration slice needs `ffmpeg`-generated media.
- Make database tests self-contained and fast through temporary databases and direct controller-level assertions.
- Make ordering, naming, and selection deterministic so a contributor can predict exactly what will run.
- Expose a small smoke suite for quick local runs and CI, plus a separately marked slower integration suite.
- Prefer domain-oriented test modules over combinator-family modules: basename, pattern matching, metadata rewrite, track ordering, TMDB naming, CLI smoke, and failure handling.
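As a sketch of the target shape, a behavior-area module could express basename cases as named parametrized tests. The helper below is a stand-in with an assumed signature, not the real FFX basename builder:

```python
import pytest

def episode_basename(show: str, season: int, episode: int) -> str:
    # Stand-in for the project's real basename builder.
    return f"{show} S{season:02d}E{episode:02d}"

# Named cases replace opaque Cartesian-product variant IDs.
@pytest.mark.parametrize(
    ("show", "season", "episode", "expected"),
    [
        ("Example Show", 1, 1, "Example Show S01E01"),
        ("Example Show", 10, 3, "Example Show S10E03"),
    ],
    ids=["first-episode", "double-digit-season"],
)
def test_episode_basename(show, season, episode, expected):
    assert episode_basename(show, season, episode) == expected
```

A failing case then reports a readable ID such as `test_episode_basename[double-digit-season]` instead of a variant string.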
## Rewrite Acceptance
- A default local test run finishes quickly and without network access.
- A contributor can identify which behavior a failing test covers without decoding variant strings like `VAASSS-A:D10-S:T001`.
- All current intended failure behaviors remain covered, but each one is asserted directly and readably.
- The rewritten suite can be adopted by CI without requiring live TMDB credentials.
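One way to keep live TMDB contract tests opt-in is an environment-variable gate evaluated at collection time. The variable names below are illustrative assumptions, not an existing FFX convention:

```python
import os
import unittest

def live_tmdb_enabled() -> bool:
    # Run live-contract tests only when the operator supplies credentials
    # AND explicitly opts in; default and CI runs stay offline.
    return bool(os.environ.get("TMDB_API_KEY")) and os.environ.get("FFX_LIVE_TESTS") == "1"

@unittest.skipUnless(live_tmdb_enabled(), "live TMDB contract tests are opt-in")
class LiveTmdbContractTests(unittest.TestCase):
    def test_episode_lookup_contract(self):
        ...  # would hit the real API; never part of the default run
```

Everything else mocks the TMDB boundary, so CI needs no credentials to satisfy the acceptance criteria above.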

9
src/ffx/__main__.py Normal file
View File

@@ -0,0 +1,9 @@
from .cli import ffx
def main():
ffx()
if __name__ == "__main__":
main()

View File

@@ -1,6 +1,14 @@
 #! /usr/bin/python3
-import os, click, time, logging, shutil, subprocess
+import os, sys, click, time, shutil, subprocess
+
+# Allow direct execution via `python src/ffx/cli.py` by preferring the package
+# root on sys.path.
+if __package__ in (None, ''):
+    script_dir = os.path.dirname(__file__)
+    package_root = os.path.dirname(os.path.dirname(__file__))
+    sys.path = [p for p in sys.path if os.path.abspath(p) != os.path.abspath(script_dir)]
+    sys.path.insert(0, package_root)
 from ffx.configuration_controller import ConfigurationController
@@ -37,6 +45,7 @@ from ffx.filter.deinterlace_filter import DeinterlaceFilter
 from ffx.constants import VERSION
 from ffx.shifted_season_controller import ShiftedSeasonController
+from ffx.logging_utils import configure_ffx_logger
 @click.group()
@@ -70,23 +79,11 @@ def ffx(ctx, database_file, verbose, dry_run):
     fileLogVerbosity = max(40 - verbose * 10, 10)
     consoleLogVerbosity = max(20 - verbose * 10, 10)
-    ctx.obj['logger'] = logging.getLogger('FFX')
-    ctx.obj['logger'].setLevel(logging.DEBUG)
-
-    ffxFileHandler = logging.FileHandler(ctx.obj['config'].getLogFilePath())
-    ffxFileHandler.setLevel(fileLogVerbosity)
-    ffxConsoleHandler = logging.StreamHandler()
-    ffxConsoleHandler.setLevel(consoleLogVerbosity)
-    fileFormatter = logging.Formatter(
-        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
-    ffxFileHandler.setFormatter(fileFormatter)
-    consoleFormatter = logging.Formatter(
-        '%(message)s')
-    ffxConsoleHandler.setFormatter(consoleFormatter)
-    ctx.obj['logger'].addHandler(ffxConsoleHandler)
-    ctx.obj['logger'].addHandler(ffxFileHandler)
+    ctx.obj['logger'] = configure_ffx_logger(
+        ctx.obj['config'].getLogFilePath(),
+        fileLogVerbosity,
+        consoleLogVerbosity,
+    )
 # Define a subcommand
@@ -392,7 +389,7 @@ def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
 @click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
-@click.option('-v', '--video-encoder', type=str, default=FfxController.DEFAULT_VIDEO_ENCODER, help=f"Target video encoder (vp9, av1 or h264)", show_default=True)
+@click.option('-v', '--video-encoder', type=str, default=FfxController.DEFAULT_VIDEO_ENCODER, help=f"Target video encoder (vp9, av1, h264 or copy)", show_default=True)
 @click.option('-q', '--quality', type=str, default="", help=f"Quality settings to be used with VP9/H264 encoder")
 @click.option('-p', '--preset', type=str, default="", help=f"Quality preset to be used with AV1 encoder")
@@ -516,9 +513,13 @@ def convert(ctx,
     context['video_encoder'] = VideoEncoder.fromLabel(video_encoder)
-    #HINT: quick and dirty override for h264, todo improve
-    targetFormat = '' if context['video_encoder'] == VideoEncoder.H264 else FfxController.DEFAULT_FILE_FORMAT
-    targetExtension = 'mkv' if context['video_encoder'] == VideoEncoder.H264 else FfxController.DEFAULT_FILE_EXTENSION
+    # HINT: quick and dirty override for h264, todo improve
+    if context['video_encoder'] in (VideoEncoder.H264, VideoEncoder.COPY):
+        targetFormat = ''
+        targetExtension = 'mkv'
+    else:
+        targetFormat = FfxController.DEFAULT_FILE_FORMAT
+        targetExtension = FfxController.DEFAULT_FILE_EXTENSION
     context['use_tmdb'] = not no_tmdb
     context['use_pattern'] = not no_pattern

View File

@@ -3,6 +3,9 @@ import os, click
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
+
+# Import the full model package so SQLAlchemy registers every mapped class
+# before metadata creation and the first ORM query.
+import ffx.model
 from ffx.model.show import Base
 from ffx.model.property import Property

View File

@@ -100,6 +100,37 @@ class FfxController():
         return [f"-c:v:{int(subIndex)}",
                 'copy']
+
+    def generateAudioCopyTokens(self, subIndex):
+        return [f"-c:a:{int(subIndex)}", 'copy']
+
+    def generateSubtitleCopyTokens(self, subIndex):
+        return [f"-c:s:{int(subIndex)}", 'copy']
+
+    def generateAttachmentCopyTokens(self, subIndex):
+        return [f"-c:t:{int(subIndex)}", 'copy']
+
+    def generateCopyTokens(self):
+        copyTokens = []
+        for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
+            copyTokens += self.generateVideoCopyTokens(trackDescriptor.getSubIndex())
+        for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.AUDIO):
+            copyTokens += self.generateAudioCopyTokens(trackDescriptor.getSubIndex())
+        for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.SUBTITLE):
+            copyTokens += self.generateSubtitleCopyTokens(trackDescriptor.getSubIndex())
+        attachmentDescriptors = (
+            self.__sourceMediaDescriptor.getTrackDescriptors(trackType=TrackType.ATTACHMENT)
+            if self.__sourceMediaDescriptor is not None
+            else self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.ATTACHMENT)
+        )
+        for trackDescriptor in attachmentDescriptors:
+            copyTokens += self.generateAttachmentCopyTokens(trackDescriptor.getSubIndex())
+        return copyTokens
+
     def generateCropTokens(self):
@@ -204,7 +235,7 @@ class FfxController():
         if qualityFilters and (quality := qualityFilters[0]['parameters']['quality']):
             self.__logger.info(f"Setting quality {quality} from command line parameter")
-        elif (quality := currentPattern.quality):
+        elif currentPattern is not None and (quality := currentPattern.quality):
             self.__logger.info(f"Setting quality {quality} from pattern default")
         else:
             quality = (QualityFilter.DEFAULT_H264_QUALITY
@@ -238,6 +269,30 @@ class FfxController():
         commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]
+
+        if videoEncoder == VideoEncoder.COPY:
+            commandSequence = (commandTokens
+                               + self.__targetMediaDescriptor.getImportFileTokens()
+                               + self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
+                               + self.__mdcs.generateDispositionTokens())
+            commandSequence += self.__mdcs.generateMetadataTokens()
+            commandSequence += self.generateCopyTokens()
+            if self.__context['perform_cut']:
+                commandSequence += self.generateCropTokens()
+            commandSequence += self.generateOutputTokens(targetPath,
+                                                         targetFormat)
+            self.__logger.debug("FfxController.runJob(): Running command sequence")
+            if not self.__context['dry_run']:
+                out, err, rc = executeProcess(commandSequence, context=self.__context)
+                if rc:
+                    raise click.ClickException(f"Command resulted in error: rc={rc} error={err}")
+            return
+
         if videoEncoder == VideoEncoder.AV1:
             commandSequence = (commandTokens

View File

@@ -1,8 +1,9 @@
-import re, logging
+import re
 from jinja2 import Environment, Undefined
 from .constants import DEFAULT_OUTPUT_FILENAME_TEMPLATE
 from .configuration_controller import ConfigurationController
+from .logging_utils import get_ffx_logger
 class EmptyStringUndefined(Undefined):
@@ -192,8 +193,7 @@ def getEpisodeFileBasename(showName,
     if context is not None and 'logger' in context.keys():
         logger = context['logger']
     else:
-        logger = logging.getLogger('FFX')
-        logger.addHandler(logging.NullHandler())
+        logger = get_ffx_logger()
     indexSeparator = ' ' if indexSeasonDigits or indexEpisodeDigits else ''
@@ -236,4 +236,3 @@ def removeRichColor(text: str):
         return text
     else:
         return str(richColorMatch.group(1))
-

68
src/ffx/logging_utils.py Normal file
View File

@@ -0,0 +1,68 @@
import logging
import os
FFX_LOGGER_NAME = "FFX"
CONSOLE_HANDLER_NAME = "ffx-console"
FILE_HANDLER_NAME = "ffx-file"
def get_ffx_logger(name: str = FFX_LOGGER_NAME) -> logging.Logger:
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
if not logger.handlers:
logger.addHandler(logging.NullHandler())
return logger
def configure_ffx_logger(
log_file_path: str,
file_level: int,
console_level: int,
name: str = FFX_LOGGER_NAME,
) -> logging.Logger:
logger = get_ffx_logger(name)
logger.propagate = False
for handler in list(logger.handlers):
if isinstance(handler, logging.NullHandler):
logger.removeHandler(handler)
console_handler = next(
(handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME),
None,
)
if console_handler is None:
console_handler = logging.StreamHandler()
console_handler.set_name(CONSOLE_HANDLER_NAME)
logger.addHandler(console_handler)
console_handler.setLevel(console_level)
console_handler.setFormatter(logging.Formatter("%(message)s"))
normalized_log_path = os.path.abspath(log_file_path)
file_handler = next(
(handler for handler in logger.handlers if handler.get_name() == FILE_HANDLER_NAME),
None,
)
if (
file_handler is not None
and os.path.abspath(file_handler.baseFilename) != normalized_log_path
):
logger.removeHandler(file_handler)
file_handler.close()
file_handler = None
if file_handler is None:
file_handler = logging.FileHandler(normalized_log_path)
file_handler.set_name(FILE_HANDLER_NAME)
logger.addHandler(file_handler)
file_handler.setLevel(file_level)
file_handler.setFormatter(
logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
return logger

View File

@@ -1,4 +1,4 @@
-import os, re, click, logging
+import os, re, click
 from typing import List, Self
@@ -9,6 +9,7 @@ from ffx.track_disposition import TrackDisposition
 from ffx.track_codec import TrackCodec
 from ffx.track_descriptor import TrackDescriptor
+from ffx.logging_utils import get_ffx_logger
 class MediaDescriptor:
@@ -46,8 +47,7 @@ class MediaDescriptor:
             self.__logger = self.__context['logger']
         else:
             self.__context = {}
-            self.__logger = logging.getLogger('FFX')
-            self.__logger.addHandler(logging.NullHandler())
+            self.__logger = get_ffx_logger()
         if MediaDescriptor.TAGS_KEY in kwargs.keys():
             if type(kwargs[MediaDescriptor.TAGS_KEY]) is not dict:
@@ -207,7 +207,7 @@ class MediaDescriptor:
     def rearrangeTrackDescriptors(self, newOrder: List[int]):
         if len(newOrder) != len(self.__trackDescriptors):
             raise ValueError('Length of list with reordered indices does not match number of track descriptors')
-        reorderedTrackDescriptors = {}
+        reorderedTrackDescriptors = []
         for oldIndex in newOrder:
             reorderedTrackDescriptors.append(self.__trackDescriptors[oldIndex])
         self.__trackDescriptors = reorderedTrackDescriptors
@@ -362,6 +362,14 @@ class MediaDescriptor:
         inputMappingTokens = []
         sortedTrackDescriptors = sorted(self.__trackDescriptors, key=lambda d: d.getIndex())
+        sourceTrackDescriptorsByIndex = {
+            td.getIndex(): td
+            for td in (
+                sourceMediaDescriptor.getTrackDescriptors()
+                if sourceMediaDescriptor is not None
+                else sortedTrackDescriptors
+            )
+        }
         # raise click.ClickException(' '.join([f"\nindex={td.getIndex()} subIndex={td.getSubIndex()} srcIndex={td.getSourceIndex()} type={td.getType().label()}" for td in self.__trackDescriptors]))
@@ -373,8 +381,12 @@ class MediaDescriptor:
             #HINT: Attached thumbnails are not supported by .webm container format
             if td.getCodec() != TrackCodec.PNG:
-                stdi = sortedTrackDescriptors[td.getSourceIndex()].getIndex()
-                stdsi = sortedTrackDescriptors[td.getSourceIndex()].getSubIndex()
+                sourceTrackDescriptor = sourceTrackDescriptorsByIndex.get(td.getSourceIndex())
+                if sourceTrackDescriptor is None:
+                    raise ValueError(f"No source track descriptor found for source index {td.getSourceIndex()}")
+                stdi = sourceTrackDescriptor.getIndex()
+                stdsi = sourceTrackDescriptor.getSubIndex()
                 trackType = td.getType()
                 trackCodec = td.getCodec()
@@ -507,7 +519,10 @@ class MediaDescriptor:
                 d
                 for d in availableFileSubtitleDescriptors
                 if ((season == -1 and episode == -1)
-                    or (d["season"] == int(season) and d["episode"] == int(episode)))
+                    or (
+                        d.get("season") == int(season)
+                        and d.get("episode") == int(episode)
+                    ))
             ],
             key=lambda d: d["index"],
         )
@@ -522,10 +537,14 @@ class MediaDescriptor:
             if matchingSubtitleTrackDescriptor:
                 # click.echo(f"Found matching subtitle file {msfd["path"]}\n")
                 self.__logger.debug(f"importSubtitles(): Found matching subtitle file {msfd['path']}")
-                matchingSubtitleTrackDescriptor[0].setExternalSourceFilePath(msfd["path"])
-                # TODO: Check if useful
-                # matchingSubtitleTrackDescriptor[0].setDispositionSet(msfd["disposition_set"])
+                matchingTrack = matchingSubtitleTrackDescriptor[0]
+                matchingTrack.setExternalSourceFilePath(msfd["path"])
+                # Prefer metadata coming from the external single-track source when
+                # it is provided explicitly by the filename contract.
+                matchingTrack.getTags()["language"] = msfd["language"]
+                if msfd["disposition_set"]:
+                    matchingTrack.setDispositionSet(msfd["disposition_set"])
     def getConfiguration(self, label: str = ''):

View File

@@ -42,6 +42,14 @@ class MediaDescriptorChangeSet():
         self.__targetTrackDescriptors = targetMediaDescriptor.getTrackDescriptors() if targetMediaDescriptor is not None else []
         self.__sourceTrackDescriptors = sourceMediaDescriptor.getTrackDescriptors() if sourceMediaDescriptor is not None else []
+        self.__targetTrackDescriptorsByIndex = {
+            trackDescriptor.getIndex(): trackDescriptor
+            for trackDescriptor in self.__targetTrackDescriptors
+        }
+        self.__sourceTrackDescriptorsByIndex = {
+            trackDescriptor.getIndex(): trackDescriptor
+            for trackDescriptor in self.__sourceTrackDescriptors
+        }
         targetMediaTags = targetMediaDescriptor.getTags() if targetMediaDescriptor is not None else {}
         sourceMediaTags = sourceMediaDescriptor.getTags() if sourceMediaDescriptor is not None else {}
@@ -70,51 +78,34 @@ class MediaDescriptorChangeSet():
         self.__numSourceTracks = len(self.__sourceTrackDescriptors)
-        maxNumOfTracks = max(self.__numSourceTracks, self.__numTargetTracks)
         trackCompareResult = {}
-        for trackIndex in range(maxNumOfTracks):
-            correspondingSourceTrackDescriptors = [st for st in self.__sourceTrackDescriptors if st.getIndex() == trackIndex]
-            correspondingTargetTrackDescriptors = [tt for tt in self.__targetTrackDescriptors if tt.getIndex() == trackIndex]
-            # Track present in target but not in source
-            if (not correspondingSourceTrackDescriptors
-                and correspondingTargetTrackDescriptors):
-                if DIFF_ADDED_KEY not in trackCompareResult.keys():
-                    trackCompareResult[DIFF_ADDED_KEY] = {}
-                trackCompareResult[DIFF_ADDED_KEY][trackIndex] = correspondingTargetTrackDescriptors[0]
-                continue
-            # Track present in target but not in source
-            if (correspondingSourceTrackDescriptors
-                and not correspondingTargetTrackDescriptors):
-                if DIFF_REMOVED_KEY not in trackCompareResult.keys():
-                    trackCompareResult[DIFF_REMOVED_KEY] = {}
-                trackCompareResult[DIFF_REMOVED_KEY][trackIndex] = correspondingSourceTrackDescriptors[0]
-                continue
-            if (correspondingSourceTrackDescriptors
-                and correspondingTargetTrackDescriptors):
-                # if correspondingTargetTrackDescriptors[0].getIndex() == 3:
-                #     raise click.ClickException(f"{correspondingSourceTrackDescriptors[0].getDispositionSet()} {correspondingTargetTrackDescriptors[0].getDispositionSet()}")
-                trackDiff = self.compareTracks(correspondingTargetTrackDescriptors[0],
-                                               correspondingSourceTrackDescriptors[0])
-                if trackDiff:
-                    if DIFF_CHANGED_KEY not in trackCompareResult.keys():
-                        trackCompareResult[DIFF_CHANGED_KEY] = {}
-                    trackCompareResult[DIFF_CHANGED_KEY][trackIndex] = trackDiff
+        for targetTrackDescriptor in self.__targetTrackDescriptors:
+            sourceTrackDescriptor = self.__sourceTrackDescriptorsByIndex.get(
+                targetTrackDescriptor.getSourceIndex()
+            )
+            if sourceTrackDescriptor is None:
+                if DIFF_ADDED_KEY not in trackCompareResult.keys():
+                    trackCompareResult[DIFF_ADDED_KEY] = {}
+                trackCompareResult[DIFF_ADDED_KEY][targetTrackDescriptor.getIndex()] = targetTrackDescriptor
+                continue
+            trackDiff = self.compareTracks(targetTrackDescriptor, sourceTrackDescriptor)
+            if trackDiff:
+                if DIFF_CHANGED_KEY not in trackCompareResult.keys():
+                    trackCompareResult[DIFF_CHANGED_KEY] = {}
+                trackCompareResult[DIFF_CHANGED_KEY][targetTrackDescriptor.getIndex()] = trackDiff
+        targetSourceIndices = {
+            targetTrackDescriptor.getSourceIndex()
+            for targetTrackDescriptor in self.__targetTrackDescriptors
+        }
+        for sourceTrackDescriptor in self.__sourceTrackDescriptors:
+            if sourceTrackDescriptor.getIndex() not in targetSourceIndices:
+                if DIFF_REMOVED_KEY not in trackCompareResult.keys():
+                    trackCompareResult[DIFF_REMOVED_KEY] = {}
+                trackCompareResult[DIFF_REMOVED_KEY][sourceTrackDescriptor.getIndex()] = sourceTrackDescriptor
         if trackCompareResult:
@@ -274,26 +265,28 @@ class MediaDescriptorChangeSet():
         outputTrackTags = addedTrackTags | changedTrackTags
-        trackDescriptor = self.__targetTrackDescriptors[trackIndex]
+        trackDescriptor = self.__targetTrackDescriptorsByIndex[trackIndex]
         for tagKey, tagValue in outputTrackTags.items():
             metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                + f":{trackDescriptor.getSubIndex()}",
                                f"{tagKey}={tagValue}"]
-        for removeKey in removedTrackTags.keys():
-            metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
-                               + f":{trackDescriptor.getSubIndex()}",
-                               f"{removeKey}="]
-        #HINT: In case of loading a track from an external file
-        #      no tags from source are present for the track so
-        #      the unchanged tracks are passed to the output file as well
         if trackDescriptor.getExternalSourceFilePath():
-            for tagKey, tagValue in unchangedTrackTags.items():
+            # When a single-track external file substitutes the
+            # media payload, keep metadata from the regular
+            # source track unless the external/target side
+            # overrides it explicitly.
+            preservedTrackTags = removedTrackTags | unchangedTrackTags
+            for tagKey, tagValue in preservedTrackTags.items():
                 metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                    + f":{trackDescriptor.getSubIndex()}",
                                    f"{tagKey}={tagValue}"]
+        else:
+            for removeKey in removedTrackTags.keys():
+                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                   + f":{trackDescriptor.getSubIndex()}",
                                   f"{removeKey}="]
         return metadataTokens

View File

@@ -0,0 +1,20 @@
"""Load ORM model modules so SQLAlchemy relationship strings can resolve."""
from .show import Base, Show
from .pattern import Pattern
from .track import Track
from .track_tag import TrackTag
from .media_tag import MediaTag
from .shifted_season import ShiftedSeason
from .property import Property
__all__ = [
'Base',
'Show',
'Pattern',
'Track',
'TrackTag',
'MediaTag',
'ShiftedSeason',
'Property',
]

View File

@@ -1,6 +1,8 @@
-import subprocess, logging
+import subprocess
 from typing import List
+
+from .logging_utils import get_ffx_logger
 def executeProcess(commandSequence: List[str], directory: str = None, context: dict = None):
     """
     niceness -20 bis +19
@@ -8,8 +10,7 @@ def executeProcess(commandSequence: List[str], directory: str = None, context: d
     """
     if context is None:
-        logger = logging.getLogger('FFX')
-        logger.addHandler(logging.NullHandler())
+        logger = get_ffx_logger()
     else:
         logger = context['logger']

View File

@@ -1,4 +1,4 @@
-import logging
+from .logging_utils import get_ffx_logger
 class ShowDescriptor():
@@ -32,8 +32,7 @@ class ShowDescriptor():
             self.__logger = self.__context['logger']
         else:
             self.__context = {}
-            self.__logger = logging.getLogger('FFX')
-            self.__logger.addHandler(logging.NullHandler())
+            self.__logger = get_ffx_logger()
         if ShowDescriptor.ID_KEY in kwargs.keys():
             if type(kwargs[ShowDescriptor.ID_KEY]) is not int:

View File

@@ -1,6 +1,8 @@
-import os, requests, time, logging
+import os, requests, time
 from datetime import datetime
+
+from .logging_utils import get_ffx_logger
 class TMDB_REQUEST_EXCEPTION(Exception):
     def __init__(self, statusCode, statusMessage):
@@ -27,8 +29,7 @@ class TmdbController():
         self.__context = context
         if context is None:
-            self.__logger = logging.getLogger('FFX')
-            self.__logger.addHandler(logging.NullHandler())
+            self.__logger = get_ffx_logger()
         else:
             self.__logger = context['logger']

View File

@@ -1,4 +1,3 @@
-import logging
 from typing import Self
 from .iso_language import IsoLanguage
@@ -6,6 +5,7 @@ from .track_type import TrackType
 from .audio_layout import AudioLayout
 from .track_disposition import TrackDisposition
 from .track_codec import TrackCodec
+from .logging_utils import get_ffx_logger
 # from .helper import dictDiff, setDiff
@@ -46,8 +46,7 @@ class TrackDescriptor:
             self.__logger = self.__context['logger']
         else:
             self.__context = {}
-            self.__logger = logging.getLogger('FFX')
-            self.__logger.addHandler(logging.NullHandler())
+            self.__logger = get_ffx_logger()
         if TrackDescriptor.ID_KEY in kwargs.keys():
             if type(kwargs[TrackDescriptor.ID_KEY]) is not int:

View File

@@ -5,6 +5,7 @@ class VideoEncoder(Enum):
     AV1 = {'label': 'av1', 'index': 1}
     VP9 = {'label': 'vp9', 'index': 2}
     H264 = {'label': 'h264', 'index': 3}
+    COPY = {'label': 'copy', 'index': 4}
     UNDEFINED = {'label': 'undefined', 'index': 0}

1
tests/__init__.py Normal file
View File

@@ -0,0 +1 @@
# Repo-root tests package for legacy and future test code.

View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1,283 @@
from __future__ import annotations
from pathlib import Path
import tempfile
import unittest
from tests.support.ffx_bundle import (
PatternTrackSpec,
SourceTrackSpec,
create_source_fixture,
expected_output_path,
extract_first_subtitle_text,
ffprobe_json,
get_tag,
prepare_pattern_database,
run_ffx_convert,
write_vtt,
)
from ffx.track_type import TrackType
try:
import pytest
except ImportError: # pragma: no cover - unittest-only environments
pytest = None
if pytest is not None:
pytestmark = [pytest.mark.integration, pytest.mark.subtrack_mapping]
class SubtrackMappingBundleTests(unittest.TestCase):
def setUp(self):
self.tempdir = tempfile.TemporaryDirectory()
self.workdir = Path(self.tempdir.name)
self.home_dir = self.workdir / "home"
self.home_dir.mkdir()
self.database_path = self.workdir / "test.db"
def tearDown(self):
self.tempdir.cleanup()
def assertCompleted(self, completed):
if completed.returncode != 0:
self.fail(
"FFX convert failed\n"
f"STDOUT:\n{completed.stdout}\n"
f"STDERR:\n{completed.stderr}"
)
def test_pattern_reorders_and_omits_tracks_preserving_metadata_and_group_order(self):
source_filename = "reorder_s01e01.mkv"
source_path = create_source_fixture(
self.workdir,
source_filename,
[
SourceTrackSpec(TrackType.VIDEO, identity="video-0", title="Video Zero"),
SourceTrackSpec(
TrackType.SUBTITLE,
identity="subtitle-1",
language="eng",
title="First Subtitle",
subtitle_lines=("first embedded subtitle",),
),
SourceTrackSpec(
TrackType.AUDIO,
identity="audio-2",
language="deu",
title="German Audio",
),
SourceTrackSpec(
TrackType.SUBTITLE,
identity="subtitle-3",
language="fra",
title="Second Subtitle",
subtitle_lines=("second embedded subtitle",),
),
SourceTrackSpec(TrackType.ATTACHMENT, attachment_name="ordered.ttf"),
],
)
prepare_pattern_database(
self.database_path,
r"^reorder_(s[0-9]+e[0-9]+)\.mkv$",
[
PatternTrackSpec(
index=0,
source_index=0,
track_type=TrackType.VIDEO,
tags={"THIS_IS": "video-0", "title": "Video Zero"},
),
PatternTrackSpec(
index=1,
source_index=2,
track_type=TrackType.AUDIO,
tags={"THIS_IS": "audio-2", "language": "deu", "title": "German Audio"},
),
PatternTrackSpec(
index=2,
source_index=1,
track_type=TrackType.SUBTITLE,
tags={"THIS_IS": "subtitle-1", "language": "eng", "title": "First Subtitle"},
),
],
)
completed = run_ffx_convert(
self.workdir,
self.home_dir,
self.database_path,
"--video-encoder",
"copy",
"--no-tmdb",
"--no-prompt",
"--no-signature",
str(source_path),
)
self.assertCompleted(completed)
output_path = expected_output_path(self.workdir, source_filename)
self.assertTrue(output_path.is_file(), output_path)
streams = ffprobe_json(output_path)["streams"]
self.assertEqual(
[stream["codec_type"] for stream in streams],
["video", "audio", "subtitle", "attachment"],
)
self.assertEqual(
[get_tag(streams[index], "THIS_IS") for index in range(3)],
["video-0", "audio-2", "subtitle-1"],
)
self.assertNotIn(
"subtitle-3",
            [get_tag(stream, "THIS_IS") for stream in streams if stream["codec_type"] != "attachment"],
        )
        self.assertEqual(streams[-1]["codec_name"], "ttf")
        extracted_subtitle = extract_first_subtitle_text(self.workdir, output_path)
        self.assertIn("first embedded subtitle", extracted_subtitle)
        self.assertNotIn("second embedded subtitle", extracted_subtitle)

    def test_cli_rearrange_streams_reorders_tracks_without_database_pattern(self):
        source_filename = "cli_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng", title="First Audio"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-2", language="deu", title="Second Audio"),
                SourceTrackSpec(TrackType.SUBTITLE, identity="subtitle-3", language="eng", title="Subtitle"),
            ],
        )
        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-pattern",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            "--rearrange-streams",
            "0,2,1,3",
            str(source_path),
        )
        self.assertCompleted(completed)
        output_path = expected_output_path(self.workdir, source_filename)
        streams = ffprobe_json(output_path)["streams"]
        self.assertEqual(
            [stream["codec_type"] for stream in streams],
            ["video", "audio", "audio", "subtitle"],
        )
        self.assertEqual(
            [get_tag(stream, "THIS_IS") for stream in streams],
            ["video-0", "audio-2", "audio-1", "subtitle-3"],
        )

    def test_pattern_validation_fails_for_nonexistent_source_track_reference(self):
        source_filename = "invalid_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1"),
                SourceTrackSpec(TrackType.SUBTITLE, identity="subtitle-2"),
            ],
        )
        prepare_pattern_database(
            self.database_path,
            r"^invalid_(s[0-9]+e[0-9]+)\.mkv$",
            [
                PatternTrackSpec(index=0, source_index=0, track_type=TrackType.VIDEO),
                PatternTrackSpec(index=1, source_index=99, track_type=TrackType.SUBTITLE),
            ],
        )
        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            str(source_path),
        )
        self.assertNotEqual(completed.returncode, 0)
        error_output = f"{completed.stdout}\n{completed.stderr}"
        self.assertIn("non-existent source track #99", error_output)
        self.assertFalse(expected_output_path(self.workdir, source_filename).exists())

    def test_external_subtitle_file_replaces_payload_and_overrides_metadata(self):
        source_filename = "substitute_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng", title="Main Audio"),
                SourceTrackSpec(
                    TrackType.SUBTITLE,
                    identity="embedded-subtitle",
                    language="eng",
                    title="Embedded Title",
                    subtitle_lines=("embedded subtitle payload",),
                ),
            ],
        )
        write_vtt(
            self.workdir / "substitute_s01e01_2_deu.vtt",
            ("external subtitle payload",),
        )
        prepare_pattern_database(
            self.database_path,
            r"^substitute_(s[0-9]+e[0-9]+)\.mkv$",
            [
                PatternTrackSpec(index=0, source_index=0, track_type=TrackType.VIDEO),
                PatternTrackSpec(index=1, source_index=1, track_type=TrackType.AUDIO),
                PatternTrackSpec(index=2, source_index=2, track_type=TrackType.SUBTITLE),
            ],
        )
        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            "--subtitle-directory",
            str(self.workdir),
            "--subtitle-prefix",
            "substitute",
            str(source_path),
        )
        self.assertCompleted(completed)
        output_path = expected_output_path(self.workdir, source_filename)
        streams = ffprobe_json(output_path)["streams"]
        subtitle_stream = [stream for stream in streams if stream["codec_type"] == "subtitle"][0]
        self.assertEqual(get_tag(subtitle_stream, "language"), "deu")
        self.assertEqual(get_tag(subtitle_stream, "title"), "Embedded Title")
        self.assertEqual(get_tag(subtitle_stream, "THIS_IS"), "embedded-subtitle")
        extracted_subtitle = extract_first_subtitle_text(self.workdir, output_path)
        self.assertIn("external subtitle payload", extracted_subtitle)
        self.assertNotIn("embedded subtitle payload", extracted_subtitle)


if __name__ == "__main__":
    unittest.main()

1
tests/legacy/__init__.py Normal file
View File

@@ -0,0 +1 @@
# Legacy custom FFX test harness modules.

View File

@@ -24,8 +24,9 @@ class BasenameCombinator():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.basename_combinator_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.basename_combinator_{ identifier }"]):
+        module_name = f"tests.legacy.basename_combinator_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding MediaCombinator as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'BasenameCombinator' and name.startswith('BasenameCombinator'):
                 return obj

View File

@@ -24,8 +24,9 @@ class DispositionCombinator2():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.disposition_combinator_2_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.disposition_combinator_2_{ identifier }"]):
+        module_name = f"tests.legacy.disposition_combinator_2_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding DispositionCombination as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'DispositionCombinator2' and name.startswith('DispositionCombinator2'):
                 return obj

View File

@@ -23,8 +23,9 @@ class DispositionCombinator3():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.disposition_combinator_3_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.disposition_combinator_3_{ identifier }"]):
+        module_name = f"tests.legacy.disposition_combinator_3_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding DispositionCombination as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'DispositionCombinator3' and name.startswith('DispositionCombinator3'):
                 return obj

View File

@@ -1,11 +1,9 @@
 import os, math, tempfile, click

-from ffx.ffx_controller import FfxController
 from ffx.process import executeProcess
 from ffx.media_descriptor import MediaDescriptor
+from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
 from ffx.track_type import TrackType
 from ffx.helper import dictCache

@@ -149,7 +147,6 @@ def createMediaTestFile(mediaDescriptor: MediaDescriptor,
     # subtitleFilePath = createVttFile(SHORT_SUBTITLE_SEQUENCE)
-    # commandTokens = FfxController.COMMAND_TOKENS
     commandTokens = ['ffmpeg', '-y']
     generatorCache = []

@@ -232,15 +229,14 @@ def createMediaTestFile(mediaDescriptor: MediaDescriptor,
                     f"{mediaTagKey}={mediaTagValue}"]
             subIndexCounter[trackType] += 1

-    #TODO: Optimize too many runs
     ffxContext = {'config': ConfigurationController(), 'logger': logger}
-    fc = FfxController(ffxContext, mediaDescriptor)
+    mdcs = MediaDescriptorChangeSet(ffxContext, mediaDescriptor)
     commandTokens += (generatorTokens
                       + importTokens
                       + mappingTokens
                       + metadataTokens
-                      + fc.generateDispositionTokens())
+                      + mdcs.generateDispositionTokens())
     commandTokens += ['-t', str(length)]

View File

@@ -25,8 +25,9 @@ class LabelCombinator():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.{LabelCombinator.PREFIX}{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.{LabelCombinator.PREFIX}{ identifier }"]):
+        module_name = f"tests.legacy.{LabelCombinator.PREFIX}{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding MediaCombinator as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'LabelCombinator' and name.startswith('LabelCombinator'):
                 return obj

View File

@@ -22,8 +22,9 @@ class MediaCombinator():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.media_combinator_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.media_combinator_{ identifier }"]):
+        module_name = f"tests.legacy.media_combinator_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding MediaCombinator as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'MediaCombinator' and name.startswith('MediaCombinator'):
                 return obj

View File

@@ -22,8 +22,9 @@ class MediaTagCombinator():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.media_tag_combinator_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.media_tag_combinator_{ identifier }"]):
+        module_name = f"tests.legacy.media_tag_combinator_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding MediaCombinator as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'MediaTagCombinator' and name.startswith('MediaTagCombinator'):
                 return obj

View File

@@ -4,7 +4,7 @@ from ffx.show_controller import ShowController
 from ffx.pattern_controller import PatternController
 from ffx.media_controller import MediaController
-from ffx.test.helper import createEmptyDirectory
+from .helper import createEmptyDirectory
 from ffx.database import databaseContext

 class Scenario():

@@ -90,11 +90,7 @@ class Scenario():
     def __init__(self, context = None):
         self._context = context
         self._testDirectory = createEmptyDirectory()
-        self._ffxExecutablePath = os.path.join(
-            os.path.dirname(
-                os.path.dirname(
-                    os.path.dirname(__file__))),
-            'ffx.py')
+        self._ffxModuleName = 'ffx'
         self._logger = context['logger']
         self._reportLogger = context['report_logger']

@@ -146,8 +142,9 @@ class Scenario():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.scenario_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.scenario_{ identifier }"]):
+        module_name = f"tests.legacy.scenario_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding Scenario as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'Scenario' and name.startswith('Scenario'):
                 return obj

View File

@@ -2,7 +2,7 @@ import os, sys, click, glob

 from .scenario import Scenario
-from ffx.test.helper import createMediaTestFile
+from .helper import createMediaTestFile
 from ffx.process import executeProcess
 from ffx.file_properties import FileProperties

@@ -13,9 +13,9 @@ from ffx.track_descriptor import TrackDescriptor
 from ffx.track_type import TrackType
 from ffx.track_disposition import TrackDisposition
-from ffx.test.media_combinator_0 import MediaCombinator0
-from ffx.test.basename_combinator import BasenameCombinator
+from .media_combinator_0 import MediaCombinator0
+from .basename_combinator import BasenameCombinator

 class Scenario1(Scenario):

@@ -92,8 +92,7 @@ class Scenario1(Scenario):
         # Phase 2: Run ffx
-        commandSequence = [sys.executable,
-                           self._ffxExecutablePath]
+        commandSequence = [sys.executable, '-m', self._ffxModuleName]

         if self._context['verbosity']:
             commandSequence += ['--verbose',

View File

@@ -2,7 +2,7 @@ import os, sys, click

 from .scenario import Scenario
-from ffx.test.helper import createMediaTestFile
+from .helper import createMediaTestFile
 from ffx.process import executeProcess
 from ffx.file_properties import FileProperties

@@ -13,7 +13,7 @@ from ffx.track_descriptor import TrackDescriptor
 from ffx.track_type import TrackType
 from ffx.track_disposition import TrackDisposition
-from ffx.test.media_combinator import MediaCombinator
+from .media_combinator import MediaCombinator

 class Scenario2(Scenario):

@@ -77,8 +77,7 @@ class Scenario2(Scenario):
         # Phase 2: Run ffx
-        commandSequence = [sys.executable,
-                           self._ffxExecutablePath]
+        commandSequence = [sys.executable, '-m', self._ffxModuleName]

         if self._context['verbosity']:
             commandSequence += ['--verbose',
commandSequence += ['--verbose', commandSequence += ['--verbose',

View File

@@ -2,11 +2,11 @@ import os, sys, click

 from .scenario import Scenario
-from ffx.test.helper import createMediaTestFile
+from .helper import createMediaTestFile
 from ffx.process import executeProcess
 from ffx.database import databaseContext
-from ffx.test.helper import createEmptyDirectory
+from .helper import createEmptyDirectory
 from ffx.helper import getEpisodeFileBasename
 from ffx.file_properties import FileProperties

@@ -17,8 +17,8 @@ from ffx.track_descriptor import TrackDescriptor
 from ffx.track_type import TrackType
 from ffx.track_disposition import TrackDisposition
-from ffx.test.media_combinator import MediaCombinator
-from ffx.test.indicator_combinator import IndicatorCombinator
+from .media_combinator import MediaCombinator
+from .indicator_combinator import IndicatorCombinator
 from ffx.show_descriptor import ShowDescriptor

@@ -163,8 +163,7 @@ class Scenario4(Scenario):
         # Phase 3: Run ffx
-        commandSequence = [sys.executable,
-                           self._ffxExecutablePath]
+        commandSequence = [sys.executable, '-m', self._ffxModuleName]

         if self._context['verbosity']:
             commandSequence += ['--verbose',

View File

@@ -22,8 +22,9 @@ class TrackTagCombinator2():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.track_tag_combinator_2_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.track_tag_combinator_2_{ identifier }"]):
+        module_name = f"tests.legacy.track_tag_combinator_2_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding DispositionCombination as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'TrackTagCombinator2' and name.startswith('TrackTagCombinator2'):
                 return obj

View File

@@ -22,8 +22,9 @@ class TrackTagCombinator3():
     @staticmethod
     def getClassReference(identifier):
-        importlib.import_module(f"ffx.test.track_tag_combinator_3_{ identifier }")
-        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.track_tag_combinator_3_{ identifier }"]):
+        module_name = f"tests.legacy.track_tag_combinator_3_{ identifier }"
+        importlib.import_module(module_name)
+        for name, obj in inspect.getmembers(sys.modules[module_name]):
             #HINT: Excluding DispositionCombination as it seems to be included by import (?)
             if inspect.isclass(obj) and name != 'TrackTagCombinator3' and name.startswith('TrackTagCombinator3'):
                 return obj

View File

@@ -1,15 +1,33 @@
#! /usr/bin/python3 #! /usr/bin/python3
import os, logging, click import os, sys, logging, click
# Allow direct execution from the source tree by exposing both the repository
# root for `tests.*` imports and `src/` for `ffx.*` imports.
script_dir = os.path.dirname(os.path.abspath(__file__))
repo_root = os.path.dirname(script_dir)
src_root = os.path.join(repo_root, 'src')
sys.path = [p for p in sys.path if os.path.abspath(p) != script_dir]
for path in [repo_root, src_root]:
if path not in sys.path:
sys.path.insert(0, path)
existing_pythonpath = [p for p in os.environ.get('PYTHONPATH', '').split(os.pathsep) if p]
pythonpath_entries = []
for path in [src_root, repo_root] + existing_pythonpath:
if path not in pythonpath_entries:
pythonpath_entries.append(path)
os.environ['PYTHONPATH'] = os.pathsep.join(pythonpath_entries)
from ffx.configuration_controller import ConfigurationController from ffx.configuration_controller import ConfigurationController
from ffx.file_properties import FileProperties from ffx.file_properties import FileProperties
from ffx.ffx_controller import FfxController from ffx.ffx_controller import FfxController
from ffx.test.helper import createMediaTestFile from tests.legacy.helper import createMediaTestFile
from ffx.test.scenario import Scenario from tests.legacy.scenario import Scenario
from ffx.tmdb_controller import TMDB_API_KEY_NOT_PRESENT_EXCEPTION from ffx.tmdb_controller import TMDB_API_KEY_NOT_PRESENT_EXCEPTION

View File

@@ -0,0 +1 @@

337
tests/support/ffx_bundle.py Normal file
View File

@@ -0,0 +1,337 @@
from __future__ import annotations

from dataclasses import dataclass, field
import json
import logging
import os
from pathlib import Path
import subprocess
import sys
from typing import Mapping

REPO_ROOT = Path(__file__).resolve().parents[2]
SRC_ROOT = REPO_ROOT / "src"
if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))

from ffx.audio_layout import AudioLayout
from ffx.database import databaseContext
from ffx.pattern_controller import PatternController
from ffx.show_controller import ShowController
from ffx.show_descriptor import ShowDescriptor
from ffx.track_controller import TrackController
from ffx.track_descriptor import TrackDescriptor
from ffx.track_disposition import TrackDisposition
from ffx.track_type import TrackType


class StaticConfig:
    def __init__(self, data: dict | None = None):
        self._data = data or {}

    def getData(self):
        return self._data


@dataclass(frozen=True)
class SourceTrackSpec:
    track_type: TrackType
    identity: str | None = None
    language: str | None = None
    title: str | None = None
    extra_tags: Mapping[str, str] = field(default_factory=dict)
    dispositions: tuple[TrackDisposition, ...] = ()
    subtitle_lines: tuple[str, ...] = ("subtitle line",)
    attachment_name: str = "fixture.ttf"


@dataclass(frozen=True)
class PatternTrackSpec:
    index: int
    source_index: int
    track_type: TrackType
    tags: Mapping[str, str] = field(default_factory=dict)
    dispositions: tuple[TrackDisposition, ...] = ()
    audio_layout: AudioLayout = AudioLayout.LAYOUT_STEREO


def make_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)
    logger.handlers = []
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
    logger.addHandler(logging.NullHandler())
    return logger


def build_controller_context(database_path: Path) -> dict:
    return {
        "logger": make_logger(f"ffx-test-db-{database_path.stem}"),
        "config": StaticConfig(),
        "database": databaseContext(str(database_path)),
    }


def dispose_controller_context(context: dict) -> None:
    context["database"]["engine"].dispose()


def write_vtt(path: Path, lines: tuple[str, ...]) -> Path:
    body = ["WEBVTT", ""]
    for index, line in enumerate(lines):
        start_ms = index * 600
        end_ms = start_ms + 500
        body.extend(
            [
                f"{start_ms // 3600000:02d}:{(start_ms // 60000) % 60:02d}:{(start_ms // 1000) % 60:02d}.{start_ms % 1000:03d} --> "
                + f"{end_ms // 3600000:02d}:{(end_ms // 60000) % 60:02d}:{(end_ms // 1000) % 60:02d}.{end_ms % 1000:03d}",
                line,
                "",
            ]
        )
    path.write_text("\n".join(body), encoding="utf-8")
    return path


def create_source_fixture(workdir: Path, filename: str, tracks: list[SourceTrackSpec], duration_seconds: int = 1) -> Path:
    output_path = workdir / filename
    has_video = any(track.track_type == TrackType.VIDEO for track in tracks)
    has_audio = any(track.track_type == TrackType.AUDIO for track in tracks)

    command = ["ffmpeg", "-y"]
    input_indices: dict[str, int] = {}
    next_input_index = 0
    if has_video:
        command += ["-f", "lavfi", "-i", "color=size=96x54:rate=2:color=black"]
        input_indices["video"] = next_input_index
        next_input_index += 1
    if has_audio:
        command += ["-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=48000"]
        input_indices["audio"] = next_input_index
        next_input_index += 1

    subtitle_input_indices: list[int] = []
    subtitle_counter = 0
    for track in tracks:
        if track.track_type == TrackType.SUBTITLE:
            subtitle_path = write_vtt(
                workdir / f"{output_path.stem}_subtitle_{subtitle_counter}.vtt",
                track.subtitle_lines,
            )
            command += ["-i", str(subtitle_path)]
            subtitle_input_indices.append(next_input_index)
            next_input_index += 1
            subtitle_counter += 1

    map_tokens: list[str] = []
    metadata_tokens: list[str] = []
    disposition_tokens: list[str] = []
    attachment_tokens: list[str] = []
    per_type_subindex: dict[TrackType, int] = {}
    subtitle_input_cursor = 0
    attachment_subindex = 0
    for track in tracks:
        if track.track_type == TrackType.VIDEO:
            map_tokens += ["-map", f"{input_indices['video']}:v:0"]
            stream_group = "v"
        elif track.track_type == TrackType.AUDIO:
            map_tokens += ["-map", f"{input_indices['audio']}:a:0"]
            stream_group = "a"
        elif track.track_type == TrackType.SUBTITLE:
            map_tokens += ["-map", f"{subtitle_input_indices[subtitle_input_cursor]}:s:0"]
            subtitle_input_cursor += 1
            stream_group = "s"
        elif track.track_type == TrackType.ATTACHMENT:
            attachment_path = workdir / track.attachment_name
            attachment_path.write_bytes(b"dummy font bytes")
            attachment_tokens += [
                "-attach",
                str(attachment_path),
                f"-metadata:s:t:{attachment_subindex}",
                "mimetype=application/x-truetype-font",
                f"-metadata:s:t:{attachment_subindex}",
                f"filename={attachment_path.name}",
            ]
            attachment_subindex += 1
            continue
        else:
            raise ValueError(f"Unsupported track type {track.track_type}")

        subindex = per_type_subindex.get(track.track_type, 0)
        per_type_subindex[track.track_type] = subindex + 1

        tags = {}
        if track.identity is not None:
            tags["THIS_IS"] = track.identity
        if track.language is not None:
            tags["language"] = track.language
        if track.title is not None:
            tags["title"] = track.title
        tags.update(track.extra_tags)
        for key, value in tags.items():
            metadata_tokens += [f"-metadata:s:{stream_group}:{subindex}", f"{key}={value}"]
        if track.dispositions:
            disposition_tokens += [
                f"-disposition:{stream_group}:{subindex}",
                "+".join(disposition.label() for disposition in track.dispositions),
            ]

    command += map_tokens
    command += metadata_tokens
    command += disposition_tokens
    command += [
        "-c:v",
        "libx264",
        "-preset",
        "ultrafast",
        "-crf",
        "35",
        "-pix_fmt",
        "yuv420p",
        "-c:a",
        "aac",
        "-b:a",
        "48k",
        "-c:s",
        "webvtt",
        "-t",
        str(duration_seconds),
        "-shortest",
    ]
    command += attachment_tokens
    command += [str(output_path)]

    completed = subprocess.run(command, cwd=workdir, capture_output=True, text=True)
    if completed.returncode != 0:
        raise AssertionError(f"ffmpeg fixture creation failed\nSTDOUT:\n{completed.stdout}\nSTDERR:\n{completed.stderr}")
    return output_path


def add_show_and_pattern(context: dict, filename_pattern: str, show_id: int = 1) -> int:
    show_descriptor = ShowDescriptor(
        id=show_id,
        name="Bundle Test Show",
        year=2000,
    )
    ShowController(context).updateShow(show_descriptor)
    pattern_id = PatternController(context).addPattern(
        {
            "show_id": show_id,
            "pattern": filename_pattern,
        }
    )
    if not pattern_id:
        raise AssertionError("Failed to create pattern in test database")
    return pattern_id


def add_pattern_tracks(context: dict, pattern_id: int, track_specs: list[PatternTrackSpec]) -> None:
    track_controller = TrackController(context)
    for track in track_specs:
        kwargs = {
            TrackDescriptor.INDEX_KEY: track.index,
            TrackDescriptor.SOURCE_INDEX_KEY: track.source_index,
            TrackDescriptor.TRACK_TYPE_KEY: track.track_type,
            TrackDescriptor.TAGS_KEY: dict(track.tags),
            TrackDescriptor.DISPOSITION_SET_KEY: set(track.dispositions),
        }
        if track.track_type == TrackType.AUDIO:
            kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = track.audio_layout
        track_controller.addTrack(TrackDescriptor(**kwargs), pattern_id)


def prepare_pattern_database(database_path: Path, filename_pattern: str, track_specs: list[PatternTrackSpec], show_id: int = 1) -> None:
    context = build_controller_context(database_path)
    try:
        pattern_id = add_show_and_pattern(context, filename_pattern, show_id=show_id)
        add_pattern_tracks(context, pattern_id, track_specs)
    finally:
        dispose_controller_context(context)


def run_ffx_convert(workdir: Path, home_dir: Path, database_path: Path, *args: str) -> subprocess.CompletedProcess[str]:
    env = os.environ.copy()
    env["HOME"] = str(home_dir)
    existing_pythonpath = env.get("PYTHONPATH", "")
    env["PYTHONPATH"] = str(SRC_ROOT) if not existing_pythonpath else f"{SRC_ROOT}{os.pathsep}{existing_pythonpath}"
    command = [
        sys.executable,
        "-m",
        "ffx",
        "--database-file",
        str(database_path),
        "convert",
        *args,
    ]
    return subprocess.run(command, cwd=workdir, env=env, capture_output=True, text=True)


def ffprobe_json(path: Path) -> dict:
    completed = subprocess.run(
        [
            "ffprobe",
            "-hide_banner",
            "-show_streams",
            "-show_format",
            "-of",
            "json",
            str(path),
        ],
        capture_output=True,
        text=True,
    )
    if completed.returncode != 0:
        raise AssertionError(f"ffprobe failed for {path}\nSTDERR:\n{completed.stderr}")
    return json.loads(completed.stdout)


def stream_tags(stream: dict) -> dict[str, str]:
    return {str(key): str(value) for key, value in stream.get("tags", {}).items()}


def get_tag(stream: dict, key: str) -> str | None:
    tags = stream_tags(stream)
    for candidate in (key, key.lower(), key.upper()):
        if candidate in tags:
            return tags[candidate]
    return None


def extract_first_subtitle_text(workdir: Path, media_path: Path) -> str:
    extracted_path = workdir / f"{media_path.stem}.subtitle.vtt"
    completed = subprocess.run(
        [
            "ffmpeg",
            "-y",
            "-i",
            str(media_path),
            "-map",
            "0:s:0",
            "-c",
            "copy",
            str(extracted_path),
        ],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if completed.returncode != 0:
        raise AssertionError(f"Subtitle extraction failed\nSTDERR:\n{completed.stderr}")
    return extracted_path.read_text(encoding="utf-8")


def expected_output_path(workdir: Path, source_filename: str) -> Path:
    return workdir / f"out_{source_filename}"

1
tests/unit/__init__.py Normal file
View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1,86 @@
from __future__ import annotations

import logging
from pathlib import Path
import sys
import tempfile
import unittest

SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))

from ffx.logging_utils import (  # noqa: E402
    CONSOLE_HANDLER_NAME,
    FILE_HANDLER_NAME,
    configure_ffx_logger,
    get_ffx_logger,
)


class LoggingUtilsTests(unittest.TestCase):
    def cleanup_logger(self, logger_name: str) -> None:
        logger = logging.getLogger(logger_name)
        for handler in list(logger.handlers):
            logger.removeHandler(handler)
            handler.close()

    def test_get_ffx_logger_adds_only_one_null_handler(self):
        logger_name = "ffx-test-null-handler"
        self.cleanup_logger(logger_name)
        logger = get_ffx_logger(logger_name)
        logger = get_ffx_logger(logger_name)
        null_handlers = [
            handler for handler in logger.handlers if isinstance(handler, logging.NullHandler)
        ]
        self.assertEqual(1, len(null_handlers))
        self.cleanup_logger(logger_name)

    def test_configure_ffx_logger_reuses_named_handlers(self):
        logger_name = "ffx-test-configure-handler"
        self.cleanup_logger(logger_name)
        with tempfile.TemporaryDirectory() as tempdir:
            first_log_path = Path(tempdir) / "first.log"
            second_log_path = Path(tempdir) / "second.log"
            logger = configure_ffx_logger(
                str(first_log_path),
                logging.ERROR,
                logging.INFO,
                name=logger_name,
            )
            logger = configure_ffx_logger(
                str(second_log_path),
                logging.DEBUG,
                logging.WARNING,
                name=logger_name,
            )
            console_handlers = [
                handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME
            ]
            file_handlers = [
                handler for handler in logger.handlers if handler.get_name() == FILE_HANDLER_NAME
            ]
            self.assertEqual(1, len(console_handlers))
            self.assertEqual(1, len(file_handlers))
            self.assertFalse(
                any(isinstance(handler, logging.NullHandler) for handler in logger.handlers)
            )
            self.assertEqual(logging.WARNING, console_handlers[0].level)
            self.assertEqual(logging.DEBUG, file_handlers[0].level)
            self.assertEqual(str(second_log_path.resolve()), file_handlers[0].baseFilename)
        self.cleanup_logger(logger_name)


if __name__ == "__main__":
    unittest.main()