Compare commits

19 Commits: `v0.2.6` ... `b80c055826`

- b80c055826
- c5fc6ac13d
- 1bead05d19
- 9fe2a842e9
- 849d03d054
- 3a87bbbba6
- ab5e8e53e1
- 0ab2408444
- bc1e0889e7
- 6dfbe1022a
- d3d2de8a0d
- 0728ece4b8
- 02e375fbf2
- 14e6ce8458
- d921629947
- 65490e2a7f
- 6c5b518e4d
- e3c18f22d4
- 57185c7f10
.gitignore (vendored, 3 changes)

```text
@@ -10,6 +10,8 @@ tools/ansible/inventory/group_vars/all.yml
ffx_test_report.log
bin/conversiontest.py

tests/assets/

build/
dist/
*.egg-info/
@@ -20,5 +22,6 @@ venv/

*.mkv
*.webm
*.mp4
ffmpeg2pass-0.log
*.sup
```
AGENTS.md (new file, 376 lines)

@@ -0,0 +1,376 @@

# AGENTS.md

This file is the entry point for agent guidance in this repository.

It is intentionally generic and reusable across projects. Keep this file focused on non-project-specific constraints, working style, and the structure used to link more detailed guidance.

# Purpose

- Provide a small default rule set for agents working in this repository.
- Keep the base guidance modular and easy to extend.
- Separate reusable agent behavior from project-specific requirements.

# Comment Syntax

- A segment wrapped in `<!--` and `-->` is a comment and must be ignored by agents.
- Use HTML comments for optional guidance that should stay inactive until enabled.
- To enable an optional segment, remove the surrounding `<!--` and `-->` markers.

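The comment convention above can be applied mechanically. A minimal sketch, assuming guidance arrives as one string (the helper name `strip_html_comments` is illustrative, not part of the repository):

```python
import re

def strip_html_comments(text: str) -> str:
    """Remove `<!-- ... -->` segments so inactive guidance is ignored."""
    # DOTALL lets a comment span multiple lines; the non-greedy `.*?`
    # stops at the first closing marker so adjacent comments stay separate.
    return re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
```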
# Core Principles

- Prefer the simplest solution that satisfies the current goal.
- Keep guidance lightweight: only add detail when it meaningfully improves outcomes.
- Reuse modular guideline files instead of expanding this file indefinitely.
- Treat project-specific documents as the source of truth for project behavior.
- When guidance conflicts, use the most specific applicable document.

# Rule Terms

- A `rule` is the general term for any constraint, requirement, definition, or similar guidance item.
- A `rule set` groups all rules inside one file that share the same rule set ID.
- Any rule inside a rule set shall use an ID following the schema `RULESET-0001`, `RULESET-0002`, and so on.
- Rules without a rule set ID are also valid, but they are not addressable by rule ID.

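The ID schema lends itself to a simple validity check. A minimal sketch (the helper name `is_rule_id` is illustrative):

```python
import re

# A rule ID is an uppercase rule set ID, a hyphen, and a four-digit number,
# e.g. `LII-0001` or `PSD-0042`.
RULE_ID_PATTERN = re.compile(r"^[A-Z]+-\d{4}$")

def is_rule_id(candidate: str) -> bool:
    """Return True when `candidate` matches the `RULESET-0001` schema."""
    return bool(RULE_ID_PATTERN.fullmatch(candidate))
```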
# Scope Of This File

This file should contain:

- Generic agent behavior and constraints.
- Rules that are reusable across multiple projects.
- Links to optional guideline modules.
- Links to project-specific requirements.
- Commented optional templates for released-product documentation and agent-output locations.

This file should not contain:

- Project business requirements.
- Project architecture decisions.
- Stack-specific implementation details unless they are universally applicable.
- Task-specific runbooks that belong in dedicated modules.

# Default Agent Behavior

- Read the relevant context before making changes.
- Prefer small, understandable edits over broad refactors.
- Preserve existing patterns unless there is a clear reason to change them.
- Document assumptions when context is missing.
- Ignore HTML comment segments.
- If a more specific enabled guideline exists for the current task, follow it.

# Guideline Structure

Use the following structure for reusable guidance files and project-specific documentation as needed:

```text
/
|-- AGENTS.md
|-- guidance/
|   |-- stacks/
|   |-- conventions/
|   `-- workflows/
|-- prompts/
`-- requirements/

Optional files and directories
|-- SCRATCHPAD.md
|-- docs/
|   |-- readme.md
|   |-- installation.md
|   `-- history.md
|-- process/
|   |-- log.md
|   `-- coding-handbook.md
```

# Optional Reusable Modules

Add files under `guidance/` only when they are needed.

# Optional Scratchpad

- `SCRATCHPAD.md` is an optional repo-root scratchpad for temporary information aimed at the next iteration.
- Developers may create or delete `SCRATCHPAD.md` at any time.
- Developers may refer to `SCRATCHPAD.md` as `scratchpad` when giving agents a source or target for information.
- Agents may read, update, create, or remove the scratchpad when the task explicitly calls for it.
- Treat the scratchpad as low-formality working context rather than canonical project truth.
- Use the scratchpad for short-lived notes, open questions, sketches, and temporary decisions that should be resolved away.
- Move durable outcomes into `requirements/`, `guidance/`, code, tests, or another long-lived location.
- If `SCRATCHPAD.md` is absent, agents should continue normally.

# Optional Rule Sets

- Optional rule sets may be stored in `guidance/optional/` or in `guidance/{section}/optional/`.
- Optional rule sets are inactive by default and shall only be applied when a prompt explicitly requests them, for example by phrases such as `Apply rules for lean interface iteration in the following steps.` or `Apply LII rules.`
- An optional rule set may be requested by its descriptive name, by its rule set ID, or by another equally clear explicit reference.
- Agents shall never infer or auto-enable optional rule sets from general intent alone.
- If an optional rule or rule set cannot be identified and addressed clearly, agents shall stop and ask before proceeding.

# Prepared Orders

- An `order` is a prepared prompt for one isolated operation rather than a general workflow or standing rule set.
- Orders shall be stored under `prompts/`.
- Order files shall use the naming schema `ORDER-0001-<slug>.md`, `ORDER-0002-<slug>.md`, and so on.
- The canonical order identifier is the `ORDER-0001` style prefix. The trailing slug is descriptive only.
- Recommended internal order file structure is: prompt ID, prompt name, purpose, trigger examples, scope, operation, and expected output.
- Orders shall only be executed when they are explicitly requested by a prompt such as `Execute ORDER-0007.` or `Execute ORDER 7.`
- Agents may accept an unambiguous short numeric reference such as `ORDER 7` as an alias for `ORDER-0007`.
- If an order cannot be identified uniquely and clearly, agents shall stop and ask before proceeding.

# Toolstack Guides

Location:

```text
guidance/stacks/
```

Examples:

- `guidance/stacks/python.md`
- `guidance/stacks/typescript.md`
- `guidance/stacks/docker.md`
- `guidance/stacks/terraform.md`

Use for:

- Language or framework expectations.
- Tooling and environment conventions.
- Build, test, and runtime guidance tied to a specific stack.

# Coding Conventions

Location:

```text
guidance/conventions/
```

Examples:

- `guidance/conventions/naming.md`
- `guidance/conventions/testing.md`
- `guidance/conventions/review.md`

Use for:

- Naming and structure conventions.
- Testing expectations.
- Code review and quality rules.

# Recurring Workflows

Location:

```text
guidance/workflows/
```

Examples:

- `guidance/workflows/feature-delivery.md`
- `guidance/workflows/bugfix.md`
- `guidance/workflows/release.md`
- `guidance/workflows/incident-response.md`

Use for:

- Repeatable task flows.
- Checklists for common delivery work.
- Operational or maintenance procedures.

<!-- Enable this optional section by removing the outer HTML comment markers from this segment when you want agents to create, update, and consult released-product documentation in `docs/`.

# Released Product Documentation

Released-product documentation should live outside the generic sections above.

Recommended location:

```text
docs/
```

Examples:

- `docs/readme.md`
- `docs/installation.md`
- `docs/history.md`

Agent rules for docs output:

- Keep content compact but comprehensive.
- Write for end users, operators, or other consumers of the released product.
- Prefer shipped behavior, supported workflows, and stable terminology over internal implementation detail.
- Keep documentation synchronized with released behavior.
- Update release history when user-visible changes are shipped.

Recommended topics:

- Product overview and intended use.
- Installation, configuration, and upgrade guidance.
- Usage patterns, operational instructions, and support boundaries.
- Compatibility notes, migration notes, and release history.
- Troubleshooting and common pitfalls when relevant. -->

<!-- Enable this optional section by removing the outer HTML comment markers from this segment when you want agents to produce and consult workflow output in `process/`.

# Agent Output In `process/`

The `process/` directory is primarily for agent output created during delivery, maintenance, and review work.

Recommended location:

```text
process/
```

Agent rules for process output:

- Use `process/` for agent-produced artifacts rather than released-product documentation.
- Keep entries concise, traceable, and tied to resulting changes.
- Treat `process/` as workflow output, not as the primary source of product truth.
- Prefer summaries and rationale over raw transcript dumps unless a workflow explicitly requires full prompt history.

# Agent Change Log

Location:

```text
process/log.md
```

Use for:

- Capturing prompts given to agents.
- Recording concise explanations of the resulting changes made by agents.
- Preserving task-by-task rationale, decisions, and implementation notes.

# Coding Handbook

Location:

```text
process/coding-handbook.md
```

Use for:

- A tutorial-style handbook that explains the programming components used in the project.
- Compact but comprehensive technical onboarding material for future contributors.
- Written explanations that connect code structure, concepts, and implementation patterns. -->

# Project-Specific Requirements

Project-specific material should live outside the generic sections above.

Recommended location:

```text
requirements/
```

Examples:

- `requirements/project.md`
- `requirements/architecture.md`
- `requirements/decisions.md`
- `requirements/domain.md`

Use for:

- Product and business requirements.
- Project goals and constraints.
- Architecture and design decisions.
- Domain knowledge that is specific to this repository.

# Agent-Level Variables

When present, `requirements/identifiers.yml` is an optional project-specific input that defines agent-level variables for use inside `requirements/` and `guidance/`.

Variable schema:

- Use `@{VARIABLE_NAME}` for agent-level variables.
- Prefer uppercase snake case names such as `@{PROJECT_ID}` or `@{VENDOR_ID}`.
- Do not treat `${...}` as an agent-level variable form; that syntax may appear in Bash or other code and should not be interpreted as agent metadata.

Scope:

- The effective scope of `requirements/identifiers.yml` is limited to `requirements/` and `guidance/`.
- Definitions from `requirements/identifiers.yml` must not leak into product code.

Defaults:

- Default `@{VENDOR_ID}` is `osgw`.
- Default `@{PROJECT_ID}` is the current repository directory name.

Resolution rules:

- Treat `requirements/identifiers.yml` as optional; when it is absent, agents may still resolve the defaults defined above.
- If a variable is used in `requirements/` or `guidance/` and it is not defined in `requirements/identifiers.yml` and does not have a default in this file, agents may stop and report the undefined variable.
- Prefer updating duplicated identifier values in `requirements/` and `guidance/` to use the variable schema when that improves consistency.

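The `@{...}` schema can be expanded with a small substitution helper. A minimal sketch, assuming resolved variables arrive as a plain dict (the helper name `expand_agent_variables` is illustrative):

```python
import re

def expand_agent_variables(text: str, variables: dict[str, str]) -> str:
    """Expand `@{NAME}` placeholders while leaving Bash-style `${NAME}` untouched."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            # Mirrors the resolution rule: report undefined variables instead of guessing.
            raise KeyError(f"undefined agent-level variable: {name}")
        return variables[name]

    # Only the `@{...}` form with an uppercase snake-case name is agent metadata.
    return re.sub(r"@\{([A-Z][A-Z0-9_]*)\}", replace, text)
```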
# Precedence

Some precedence levels may be absent because optional levels can remain inside HTML comments. The smaller numeric index wins.

Apply guidance in this order:

1. Direct user or task instructions.
2. Project-specific documents in `requirements/`.
<!-- 3. Released-product documentation in `docs/` when shipped behavior or user-facing expectations are relevant. -->
4. Relevant modular guides in `guidance/stacks/`, `guidance/conventions/`, or `guidance/workflows/`.
<!-- 5. Agent output in `process/` when prior prompts, rationale, or implementation notes are relevant. -->
6. This `AGENTS.md`.

# Maintenance

- Keep this file short and stable.
- Move detail into dedicated modules when a section becomes too specific or too long.
- Add new guideline files only when they solve a recurring need.
- Remove outdated references when the repository structure changes.

# Current Status

This repository defines the base `AGENTS.md` structure plus project-specific requirements and modular guidance.

Future project work can add:

- Reusable modules under `guidance/`
- Project-specific documentation under `requirements/`
- Optional temporary iteration context in `SCRATCHPAD.md`
- Optional released-product documentation under `docs/` by uncommenting its segment
- Optional agent output under `process/` by uncommenting its segment
- Cross-references from this file once those documents exist

README.md (14 changes)

@@ -99,6 +99,20 @@ TMDB-backed metadata enrichment requires `TMDB_API_KEY` to be set in the environ

## Version History

### 0.3.1

- debug mode screen titles now append the active Textual screen class name, making screen-specific troubleshooting easier during inspect and edit flows
- `--cut` again works as a combined flag/option: omitted disables cutting, bare `--cut` applies the default `60,180`, and explicit duration or `START,DURATION` values stay supported
- H.265 unmux commands no longer force an invalid `-f h265` output format, keeping ffmpeg copy extraction aligned with the required Annex B bitstream filter
- H.264 encoding now falls back from `libx264` to `libopenh264` with a warning when needed, and the test fixtures use the same encoder fallback so the suite remains portable across ffmpeg builds

### 0.3.0

- inspect and edit screens now refresh nested track and pattern changes more reliably, with inspect-mode tables aligned to the target pattern view shown in the differences pane
- metadata editing got a follow-up polish pass with clearer ffmpeg notifications, a shared in-screen log pane, safer apply/reload handling, and expanded cleanup and normalization coverage
- track and asset probing recognize additional codecs, and the modern test suite now covers more metadata-editor, change-set, screen-state, and asset-probe behavior
- Textual now requires version `8.0` or newer to match the UI APIs used by the current screens

### 0.2.6

- DB-free `ffx edit` workflow for in-place metadata editing via temporary-file rewrite

SCRATCHPAD.md (new file, 83 lines)

@@ -0,0 +1,83 @@

# Scratchpad

## Goal

- Capture a compact, project-wide list of optimization candidates after a broad scan of the current FFX codebase, tooling, and requirements.

## Focused Snapshot

- Highest-leverage application optimizations:
  - Decide whether placeholder help/settings screens should ship or disappear.
  - Trim dead helpers and other dormant surface that still looks active.
- Highest-leverage repo and workflow optimizations:
  - Continue migrating the oversized legacy test/combinator surface into focused modern tests so it is easier to run, debug, and extend.

## Optimization Candidates

1. Placeholder UI surfaces should either ship or disappear
   - [`src/ffx/help_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/help_screen.py) and [`src/ffx/settings_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/settings_screen.py) are placeholders.
   - Optimization:
     - Either remove them from the active UI surface or complete them.
     - Avoid paying ongoing maintenance cost for unfinished navigation targets.
   - Expected value:
     - Leaner interface.
     - Lower UX ambiguity.

2. Several helper functions are unfinished or dead-weight
   - [`src/ffx/helper.py`](/home/osgw/.local/src/codex/ffx/src/ffx/helper.py) contains `permutateList(...): pass`.
   - There are many combinator and conversion placeholders across tests and migrations.
   - Optimization:
     - Remove dead code, finish it, or isolate it behind a clearly dormant area.
     - Avoid carrying stubbed utility surface that looks reusable but is not.
   - Expected value:
     - Smaller mental model.
     - Less time spent re-evaluating inactive paths.

3. Test suite shape is expensive to understand and likely expensive to run
   - The project still carries a large legacy matrix of combinator files under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy), several placeholder `pass` implementations, and at least one suspicious filename with an embedded space: [`tests/legacy/disposition_combinator_2_3 .py`](/home/osgw/.local/src/codex/ffx/tests/legacy/disposition_combinator_2_3 .py).
   - A first focused replacement slice now exists in [`tests/integration/subtrack_mapping/test_cli_bundle.py`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping/test_cli_bundle.py), so the remaining work is migration and consolidation rather than creating the modern test shape from scratch.
   - Optimization:
     - Continue replacing broad combinator matrices with focused parametrized integration and unit tests.
     - Retire the bespoke legacy discovery and runner path once equivalent coverage exists.
     - Normalize file naming and test discovery conventions.
   - Expected value:
     - Faster contributor onboarding.
     - Easier CI adoption later.

## Open

- Durable shipped items have been moved into [`README.md`](/home/osgw/.local/src/codex/ffx/README.md) version history through `0.2.6`.
- Should optimization work focus first on operator-perceived latency, internal maintainability, or correctness-risk cleanup that also has performance upside?
- Is the long-term supported model still “local Linux workstation plus Textual UI,” or should optimization decisions bias toward a more scriptable/headless CLI?

## Gaps Right Now

- No explicit prioritization owner or milestone for the optimization backlog.
- No benchmark or timing harness exists for startup, probe, DB, or conversion orchestration overhead.
- Repo hygiene is still mixed with generated artifacts and some clearly unfinished files.
- The legacy TMDB-backed `Scenario 4` path is currently blocked by a pattern/track regression: `Patterns must define at least one track before they can be stored.` This surfaced while rerunning TMDB-dependent checks after the zero-track pattern hardening.

## Next

1. Triage the list into quick wins, medium refactors, and long-horizon cleanup.
2. Tackle the cheapest remaining product-surface cleanup first:
   - placeholder UI surfaces and dead helper cleanup.
3. Continue replacing oversized legacy test matrices with focused modern integration and unit coverage.
4. Triage the legacy `Scenario 4` pattern/track failure and decide whether to fix the harness, adapt it to the zero-track guard, or retire that path during the ongoing test-suite migration.

## Delete When

- Delete this scratchpad once the optimization backlog is either converted into issues/work items or distilled into durable project guidance.

## Missing Timestamps

Detect the ffmpeg warning "Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly" and attempt an autofix with `-fflags +genpts`; warn if the retry fails, then error. Check whether these flags collide with anything.

<!--

## Source Formats

-->

docs/file_formats.md (new file, 170 lines)

@@ -0,0 +1,170 @@

# File Formats

This document captures source-file-format notes that complement the normative requirements in `requirements/source_file_formats.md`.

The first documented format is a Matroska source that carries styled ASS/SSA subtitle streams together with embedded font attachments.

## Styled ASS In Matroska With Embedded Fonts

These files are typically `.mkv` releases where subtitle rendering quality depends on keeping both parts of the subtitle package together:

- one or more subtitle streams with codec `ass`
- one or more attachment streams that embed font files used by those subtitles

This matters because ASS subtitles are not plain text subtitles in the narrow WebVTT sense. They can carry layout, styling, positioning, karaoke, signs, and other typesetting effects. If the matching embedded fonts are lost, consumers can still see subtitle text but the intended styling and sometimes glyph coverage can be degraded.

For FFX this format is special because the ASS subtitle streams should remain normally editable and mappable, while the related font attachments should be transported unchanged.

## Observed Sample

Assessment date: `2026-04-17`

Observed sample file:

- `tests/assets/boruto_s01e283_ssa.mkv`

Commands used for assessment:

```bash
ffprobe tests/assets/boruto_s01e283_ssa.mkv
ffprobe -hide_banner -show_format -show_streams -of json tests/assets/boruto_s01e283_ssa.mkv
```

Observed stream layout:

| Stream index | Kind | Key details |
| --- | --- | --- |
| `0` | video | `codec_name=h264` |
| `1` | audio | `codec_name=aac`, `language=jpn` |
| `2` | subtitle | `codec_name=ass`, `language=ger`, default |
| `3` | subtitle | `codec_name=ass`, `language=eng` |
| `4`-`13` | attachment | `tags.mimetype=font/ttf`, `.ttf` filenames |

Observed attachment filenames:

- `AmazonEmberTanuki-Italic.ttf`
- `AmazonEmberTanuki-Regular.ttf`
- `Arial.ttf`
- `Arial Bold.ttf`
- `Georgia.ttf`
- `Times New Roman.ttf`
- `Times New Roman Bold.ttf`
- `Trebuchet MS.ttf`
- `Verdana.ttf`
- `Verdana Bold.ttf`

Important probe behavior from the real sample:

- Plain `ffprobe` lists the font streams as `Attachment: none`.
- Plain `ffprobe` also prints warnings such as `Could not find codec parameters for stream 4 (Attachment: none): unknown codec` and later `Unsupported codec with id 0 for input stream ...`.
- The JSON produced by `FileProperties.FFPROBE_COMMAND_TOKENS` (`ffprobe -hide_banner -show_format -show_streams -of json`) still exposes the attachment streams clearly through `codec_type="attachment"` and the attachment tags.
- In that JSON, the attachment streams do not expose `codec_name`.

This last point is important for FFX: robust detection must not depend on attachment `codec_name` being present.

## Detection Guidance

Current known indicators for this format are:

- one or more subtitle streams with `codec_type="subtitle"` and `codec_name="ass"`
- one or more attachment streams with `codec_type="attachment"`
- attachment tags that identify embedded fonts, especially `tags.mimetype="font/ttf"`
- attachment filenames that end in `.ttf`

The pattern can vary. FFX should therefore treat the above as a cluster of signals rather than an exact signature tied to one file.

Inference from the observed sample plus FFmpeg documentation:

- MIME matching should not be limited to `font/ttf` alone.
- The Boruto sample uses `font/ttf`.
- FFmpeg's Matroska attachment example uses `mimetype=application/x-truetype-font` for a `.ttf` attachment.
- Detection should therefore normalize multiple TTF-like MIME values rather than depend on a single exact string.

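The indicator cluster above translates directly into a check over the `-of json` stream list. A minimal sketch; the function and constant names are illustrative rather than FFX APIs, and the MIME set is an assumption drawn from the two values named above plus one common legacy variant:

```python
TTF_LIKE_MIMETYPES = {"font/ttf", "application/x-truetype-font", "application/x-font-ttf"}

def has_styled_ass_with_fonts(streams: list[dict]) -> bool:
    """Detect the ASS-plus-font-attachment pattern from ffprobe JSON stream entries.

    Deliberately ignores `codec_name` on attachments, since the observed
    sample's attachment streams do not expose it.
    """
    has_ass = any(
        s.get("codec_type") == "subtitle" and s.get("codec_name") == "ass"
        for s in streams
    )
    has_font_attachment = any(
        s.get("codec_type") == "attachment"
        and (
            s.get("tags", {}).get("mimetype", "").lower() in TTF_LIKE_MIMETYPES
            or s.get("tags", {}).get("filename", "").lower().endswith(".ttf")
        )
        for s in streams
    )
    return has_ass and has_font_attachment
```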
## Processing Expectations In FFX

The format-specific requirements live in `requirements/source_file_formats.md`. In practical terms, FFX should:

- recognize the ASS-plus-font-attachment pattern even when attachment probe data is incomplete
- tell the operator that the pattern was detected and that special handling is being used
- reject sidecar subtitle import for such sources, because converting or replacing these subtitle tracks with ordinary external text subtitles would break the intended subtitle package
- continue to allow normal manipulation of the ASS subtitle tracks themselves
- preserve the font attachment streams unchanged

## FFmpeg Notes

Relevant FFmpeg documentation confirms several behaviors that line up with FFX's needs:

- FFmpeg documents `-attach` as adding an attachment stream to the output, and explicitly names Matroska fonts used in subtitle rendering as an example.
- FFmpeg documents attachment streams as regular streams that are created after the mapped media streams.
- FFmpeg documents `-dump_attachment` for extracting attachment streams, which is useful for debugging or validating a source file's embedded fonts.
- FFmpeg's Matroska example requires a `mimetype` metadata tag for attached fonts, which is consistent with using attachment tags as detection signals.
- FFmpeg also notes that attachments are implemented as codec extradata. That helps explain why probe output for attachment streams can look different from ordinary audio, video, and subtitle streams.

Implication for FFX:

- Attachment preservation is not an optional cosmetic feature for this format. It is part of preserving the subtitle package correctly.

## Jellyfin Notes

Jellyfin's documentation also supports keeping this format intact:

- Jellyfin's subtitle compatibility table lists `ASS/SSA` as supported in `MKV` and not supported in `MP4`.
- Jellyfin notes that when subtitles must be transcoded, they are either converted to a supported format or burned into the video, and burning them in is the most CPU-intensive path.
- Jellyfin's subtitle-extraction example for `SSA/ASS` first dumps attachment streams and then extracts the ASS subtitle stream, which reflects the real relationship between ASS subtitles and embedded fonts in MKV releases.
- Jellyfin's font documentation says text-based subtitles require fonts to render properly.
- Jellyfin's configuration documentation says the web client uses configured fallback fonts for ASS subtitles when other fonts such as MKV attachments or client-side fonts are not available.

Inference from the Jellyfin compatibility tables:

- Keeping this subtitle format in Matroska is the safest interoperability choice for Jellyfin consumers.
- Converting the subtitle payload to WebVTT would lose styled ASS behavior.
- Dropping the attachment streams would force client or fallback font substitution and can change appearance or glyph coverage.

## References

- FFmpeg documentation: https://ffmpeg.org/ffmpeg.html
- Jellyfin codec support: https://jellyfin.org/docs/general/clients/codec-support/
- Jellyfin configuration and fonts: https://jellyfin.org/docs/general/administration/configuration/

guidance/workflow/optional/lean-interface-iteration.md (new file, 28 lines)

@@ -0,0 +1,28 @@

# Lean Interface Iteration

Rule set name: `lean-interface-iteration`

Rule set ID: `LII`

Status: optional, prompt-activated only

Trigger examples:

- `Apply the lean-interface-iteration rules.`
- `Apply LII rules.`

LII-0001: Apply this rule set only when it is explicitly requested in the prompt.

LII-0002: The target of work under this rule set is the iterated product state for the addressed iteration only.

LII-0003: Optimize the addressed interface toward the leanest and least complex model that still satisfies the iteration order.

LII-0004: Backward compatibility, legacy aliases, and compatibility shims are not required unless the prompt explicitly asks to preserve them.

LII-0005: Prefer one authoritative interface over multiple overlapping parameters, flags, or naming variants.

LII-0006: Remove or avoid transitional interface layers when they are not required by the addressed iteration order.

LII-0007: Update affected tests, guidance, requirements, and documentation so they describe the simplified interface model rather than a mixed legacy-and-new model.

LII-0008: Never change behavior, interfaces, or surrounding areas that are not addressed by the current iteration order.

56
guidance/workflow/optional/preparation-script-design.md
Normal file
@@ -0,0 +1,56 @@
# Preparation Script Design

Rule set name: `preparation-script-design`

Rule set ID: `PSD`

Status: optional, prompt-activated only

Trigger examples:

- `Apply the preparation-script-design rules.`
- `Apply PSD rules.`

PSD-0001: Apply this rule set only when it is explicitly requested in the prompt.

PSD-0002: Use this rule set for scripts whose purpose is to prepare, verify, or expose a local development or automation environment rather than to perform product runtime behavior.

PSD-0003: Keep a preparation script focused on environment readiness, dependency installation, local helper exposure, and clear verification output; do not mix unrelated product logic into the script.

PSD-0004: Design the script to be idempotent so repeated runs converge on the same prepared state without unnecessary reinstallation or destructive side effects.

PSD-0005: Provide a verification-only mode such as `--check` that reports readiness without installing, modifying, or creating dependencies.

PSD-0006: Separate component checks from installation steps so the script can report what is missing before or after attempted remediation.

PSD-0007: Group required capabilities into clear purpose-oriented sections such as support toolchains, local package bundles, generated environment helpers, or other relevant readiness areas instead of presenting one undifferentiated dependency list.

PSD-0008: Prefer explicit per-component check helpers over opaque one-shot checks so failures remain traceable and easy to extend.

PSD-0009: Generate or update environment helper files only when they provide a stable, reusable way to expose repo-local or workspace-local tools, paths, or environment variables.

PSD-0010: Generated environment helper files shall be safe to source multiple times and should avoid duplicating path entries or clobbering unrelated user environment state.

PSD-0011: When a preparation flow seeds optional user-owned files such as config templates, do so non-destructively by creating them only when absent unless the prompt explicitly requests overwrite behavior.

PSD-0012: Report status in a concise scan-friendly line format of the shape `[status] Label: detail`, where the label names the checked component and the detail string stays short and specific.

PSD-0013: Prefer a small canonical status vocabulary in those report lines, with `ok` for satisfied checks, `warn` for non-blocking gaps, and a failure status such as `failed` for blocking or unsuccessful states.

PSD-0014: When a preparation script uses terminal colors in its status output, apply a consistent severity mapping so `ok` is green, `warn` is yellow, and all other status levels are red.

PSD-0015: In bracketed status markers such as `[ok]` or `[warn]`, keep the square brackets uncolored and apply the severity color only to the inner status text.

PSD-0016: Colorized status output shall degrade safely in non-terminal or non-color contexts so the script remains readable and automation-friendly without ANSI support.

PSD-0017: End with an explicit readiness conclusion that distinguishes between successful preparation, incomplete prerequisites, and failed installation attempts.

PSD-0018: Installation logic should use the narrowest supported platform-specific package-manager actions necessary for the declared scope and should fail clearly when no supported installation path is available.

PSD-0019: Treat repo-local helper tooling and local package installation boundaries explicitly rather than assuming global installs, especially when the prepared environment is intended to be reproducible.

PSD-0020: Keep the script suitable for both interactive local developer use and non-interactive automation checks by avoiding prompts during normal execution unless the prompt explicitly requires interactivity.

PSD-0021: When a script depends on generated helper files or adjacent validation helpers, update those supporting files only as needed to keep the preparation flow coherent and usable.

PSD-0022: Verify shell syntax after changes and, when feasible, run a dry readiness check so the resulting preparation flow is validated rather than only written.

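Taken together, PSD-0012 through PSD-0016 imply a small formatting helper. The sketch below is illustrative Python, not code from any existing script; the function names are assumptions. It shows the `[status] Label: detail` shape, the green/yellow/red severity mapping with uncolored brackets, and safe degradation outside a TTY:

```python
import sys

# Severity mapping per PSD-0014: ok=green, warn=yellow, everything else red.
COLORS = {"ok": "\033[32m", "warn": "\033[33m"}
RED = "\033[31m"
RESET = "\033[0m"


def format_status(status: str, label: str, detail: str, use_color: bool) -> str:
    """Render one `[status] Label: detail` report line (PSD-0012).

    Brackets stay uncolored; only the inner status text is colored
    (PSD-0015), and color is skipped entirely when disabled (PSD-0016).
    """
    if use_color:
        color = COLORS.get(status, RED)
        status = f"{color}{status}{RESET}"
    return f"[{status}] {label}: {detail}"


def supports_color(stream=sys.stdout) -> bool:
    # Degrade safely outside a terminal so automation sees plain text.
    return hasattr(stream, "isatty") and stream.isatty()


if __name__ == "__main__":
    for status, label, detail in [
        ("ok", "ffmpeg", "found on PATH"),
        ("warn", "config template", "missing, will be seeded"),
        ("failed", "local package bundle", "install failed"),
    ]:
        print(format_status(status, label, detail, supports_color()))
```

A shell implementation would follow the same shape; the point is that the color code wraps only the inner status word and is omitted whenever the output stream is not a color terminal.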
@@ -1,13 +1,13 @@
 [project]
 name = "ffx"
 description = "FFX recoding and metadata managing tool"
-version = "0.2.6"
+version = "0.3.1"
 license = {file = "LICENSE.md"}
 dependencies = [
     "requests",
     "jinja2",
     "click",
-    "textual",
+    "textual>=8.0",
     "sqlalchemy",
 ]
 readme = {file = "README.md", content-type = "text/markdown"}

98
requirements/architecture.md
Normal file
@@ -0,0 +1,98 @@
# Architecture

## Architecture Goals

- Keep the tool small, local, and easy to reason about.
- Separate media inspection, stored normalization rules, and conversion execution clearly enough that users can inspect and adjust behavior.
- Favor explicit local state and deterministic rule application over opaque automation.
- Make external runtime dependencies and platform assumptions visible.

## System Context

- Primary actors:
  - Local operator running the CLI.
  - Local operator using the Textual TUI to inspect files and maintain rules.
- External systems:
  - `ffprobe` for media introspection.
  - `ffmpeg` for conversion and extraction.
  - TMDB API for optional show and episode metadata.
  - Local filesystem for source media, generated outputs, subtitles, logs, config, and database files.
- Data entering the system:
  - Media container and stream metadata from source files.
  - Regex patterns and per-show normalization rules entered in the TUI.
  - Optional config values from `~/.local/etc/ffx.json`.
  - Optional TMDB identifiers and CLI overrides.
  - Optional external subtitle files.
- Data leaving the system:
  - Normalized output media files.
  - Extracted stream files from unmux operations.
  - SQLite rows representing shows, patterns, tracks, tags, shifted seasons, and properties.
  - Local log output and console messages.

## High-Level Building Blocks

- Frontend, CLI, API, or worker:
  - A Click-based CLI in [`src/ffx/cli.py`](/home/osgw/.local/src/codex/ffx/src/ffx/cli.py), exposed as the `ffx` command and via `python -m ffx`, including lightweight maintenance wrappers for bundle setup, workstation preparation, and upgrade tasks.
  - A Textual terminal UI rooted in [`src/ffx/ffx_app.py`](/home/osgw/.local/src/codex/ffx/src/ffx/ffx_app.py) with screens for shows, patterns, file inspection, tracks, tags, and shifted seasons.
- Core business logic:
  - Descriptor objects model media files, shows, and tracks.
  - Controllers encapsulate CRUD operations and workflow orchestration for shows, patterns, tags, tracks, season shifts, configuration, and conversion.
  - `MediaDescriptorChangeSet` computes differences between a file and its stored target schema to drive metadata and disposition updates.
  - File inspection caches combined `ffprobe` data and crop-detection results per source and sampling window within one process to avoid repeated subprocess work.
- Storage:
  - SQLite via SQLAlchemy ORM, with a schema rooted in shows, patterns, tracks, media tags, track tags, shifted seasons, and generic properties.
  - Ordered schema migrations are loaded dynamically from per-version-step modules under [`src/ffx/model/migration/`](/home/osgw/.local/src/codex/ffx/src/ffx/model/migration/).
  - A configuration JSON file supplies optional path, metadata-filtering, and filename-template settings.
- Integration adapters:
  - Process execution wrapper for `ffmpeg`, `ffprobe`, `nice`, and `cpulimit`, with explicit disabled states for niceness and CPU limiting, support for both absolute `cpulimit` values and machine-wide percent input, and a combined `cpulimit -- nice -n ... <command>` execution shape when both limits are configured.
  - HTTP adapter for TMDB via `requests`.

## Data And Interface Notes

- Key entities or records:
  - `Show`: canonical TV show metadata plus digit-formatting rules, optional show-level notes, and an optional show-level encoding-quality fallback.
  - `Pattern`: regex rule tying filenames to one show and one target media schema.
  - `Track` and `TrackTag`: persisted target stream records, codec, dispositions, audio layout, and stream-level tags. Detailed source-to-target mapping rules live in `requirements/subtrack_mapping.md`.
  - `MediaTag`: persisted container-level metadata for a pattern.
  - `ShiftedSeason`: mapping from source numbering ranges to adjusted season and episode numbers, owned either by a show as fallback or by a pattern as override.
  - `Property`: internal key-value storage currently used for database versioning.
- External interfaces:
  - CLI commands for conversion, inspection, extraction, and crop detection.
  - TUI workflows for rule authoring and rule maintenance.
  - Environment variable `TMDB_API_KEY` for TMDB access.
  - Config keys `databasePath`, `logDirectory`, and `outputFilenameTemplate`, plus optional metadata-filter rules.
- Validation rules:
  - Only supported media-file extensions are accepted for conversion.
  - The stored database version must either already match the runtime-required version or have a supported sequential migration path to it.
  - A normalized descriptor may have at most one default and one forced stream per relevant track type.
  - Shifted-season ranges are intended not to overlap within the same owner scope and season, and runtime resolution prefers pattern-owned matches over show-owned matches.
  - TMDB lookups require a show ID and season and episode numbers.
- Error-handling approach:
  - User-facing operational failures are raised as `click.ClickException` or warnings.
  - Ambiguous default and forced stream states trigger prompts unless `--no-prompt` is set, in which case the command fails fast.
  - External-process failures and invalid media are surfaced through logs and command errors rather than retries, except for TMDB rate-limit retries.

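The "at most one default and one forced stream per relevant track type" rule can be expressed as a small check. This is an illustrative sketch, not code from the repository; the stream representation is an assumption:

```python
from collections import Counter


def disposition_violations(streams):
    """Check the one-default-and-one-forced-per-track-type rule.

    `streams` is an iterable of (track_type, dispositions) pairs, e.g.
    ("audio", {"default"}). Returns the sorted list of violated
    (track_type, flag) combinations; an empty list means valid.
    """
    counts = Counter()
    for track_type, dispositions in streams:
        for flag in ("default", "forced"):
            if flag in dispositions:
                counts[(track_type, flag)] += 1
    return sorted(key for key, n in counts.items() if n > 1)
```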
## Deployment And Operations

- Runtime environment:
  - Local Python environment with the package installed and `ffmpeg`, `ffprobe`, `nice`, and `cpulimit` available on `PATH`.
- Deployment shape:
  - Single-process command execution on demand; no daemon, queue, or network service of its own.
- Secrets and configuration handling:
  - TMDB secret is read from `TMDB_API_KEY`.
  - User config is read from `~/.local/etc/ffx.json`.
  - Database path may also be overridden per command via `--database-file`.
- Logging and monitoring approach:
  - File and console logging configured per invocation.
  - Default log file path is `~/.local/var/log/ffx.log`.
  - No dedicated monitoring integration is present.

## Open Technical Questions

- Question: Should Linux-specific assumptions such as `/dev/null`, `nice`, `cpulimit`, and `~/.local` remain part of the supported-platform contract?
  - Risk: Portability and operational behavior are underspecified for non-Linux environments.
  - Next decision needed: Either document Linux-like systems as the official support boundary or refactor the process and path handling for broader portability.

- Question: Should placeholder TUI surfaces such as settings and help become part of the required product surface or stay explicitly out of scope?
  - Risk: The UI appears broader than the actually finished feature set.
  - Next decision needed: Either remove or complete placeholder screens and update requirements accordingly.

211
requirements/metadata_editor.md
Normal file
@@ -0,0 +1,211 @@
# Metadata Editor

This file defines the requirements for a database-free interactive metadata
editor command derived from the current file-inspection UI.

Feasibility from the current codebase: yes, with a moderate refactor.

The strongest reusable pieces already exist:

- `ffprobe`-backed media probing through `FileProperties` and `MediaDescriptor`
- descriptor-level metadata and disposition mutation through `MediaDescriptor`
  and `TrackDescriptor`
- diff and ffmpeg token generation through `MediaDescriptorChangeSet`
- stream-copy remux execution through `FfxController` with `VideoEncoder.COPY`
- reusable tag and track edit dialogs in the Textual UI

The main missing pieces are:

- a CLI bootstrap path that does not initialize SQLite
- a probe-only path that does not instantiate database-backed controllers
- a clean separation between original file state and editable draft state
- a safe temporary-output and replace workflow for writing changes back to the
  same file path

## Scope

- One new command: `ffx edit <file>`
- One-file interactive editing through a Textual screen derived from
  `MediaDetailsScreen`
- Editing container-level metadata and per-stream metadata already visible in
  the application
- Editing stream dispositions that are represented as metadata-like output
  state, especially `default` and `forced`
- Writing the result back to the original file path through a temporary output
  file and replace step

## Out Of Scope

- SQLite reads, writes, migrations, or pattern matching
- TMDB lookups, show selection, pattern selection, or shifted-season logic
- Batch editing multiple files in one command invocation
- Video or audio transcoding
- Container changes, filename changes, or rename workflows
- Stream add, stream delete, stream reorder, or stream substitution from
  external files in the first release
- Editing technical stream identity such as codec, stream type, source index,
  or audio layout in the first release
- Chapter editing

## Terms

- `baseline descriptor`: immutable in-memory representation of the file as last
  probed from disk
- `draft descriptor`: mutable in-memory representation of the desired output
  state
- `edit mode`: the database-free TUI mode used by `ffx edit`
- `planned changes`: user-visible summary of the differences between baseline
  and draft plus any configured cleanup actions
- `temporary output file`: the write target used before replacing the original
  file path

## Rules

- `METADATA_EDITOR-0001`: The system shall provide a command `ffx edit <file>`
  that requires exactly one existing media file path and opens an interactive
  Textual editor for that file.
- `METADATA_EDITOR-0002`: `ffx edit` shall not initialize SQLite, shall not
  open the configured database file, shall not prompt for database migration,
  and shall not instantiate any controller that depends on `context['database']`.
- `METADATA_EDITOR-0003`: `ffx edit` may still read configuration and logging
  settings from `~/.local/etc/ffx.json`, but any global database option shall
  have no effect on this command's behavior.
- `METADATA_EDITOR-0004`: Edit mode shall be derived from the current
  `MediaDetailsScreen` behavior and layout where practical, but all DB-only UI
  elements and actions such as show selection, pattern input, and pattern CRUD
  actions shall be hidden, disabled, or replaced.
- `METADATA_EDITOR-0005`: Edit mode shall keep the baseline descriptor and the
  draft descriptor as separate objects. Editing actions shall mutate only the
  draft descriptor until the operator explicitly applies changes.
- `METADATA_EDITOR-0006`: The application shall keep raw metadata values
  separate from rendered labels. Rich or Textual markup may be used for
  presentation, but it shall never be stored in descriptor state, reused as
  source data, or written into the media file.
- `METADATA_EDITOR-0007`: The planned-changes view shall compare the baseline
  descriptor with the draft descriptor using `MediaDescriptorChangeSet` or an
  equivalent descriptor-diff mechanism. It shall no longer mean `file -> db`.
- `METADATA_EDITOR-0008`: The editor shall support container-tag add, edit, and
  delete operations on the draft descriptor.
- `METADATA_EDITOR-0009`: The editor shall support per-stream metadata edit
  operations on the draft descriptor, including at least language, title, and
  arbitrary stream tag key-value pairs.
- `METADATA_EDITOR-0010`: The editor shall support setting and clearing
  `default` and `forced` dispositions in the draft descriptor, while enforcing
  that there is at most one `default` and at most one `forced` stream per track
  type.
- `METADATA_EDITOR-0011`: The first released editor scope shall treat technical
  stream structure as immutable. A user shall not be able to change stream
  count, output order, codec, track type, audio layout, or source-index
  mapping through `ffx edit`.
- `METADATA_EDITOR-0012`: The track-edit UI used in edit mode shall therefore
  expose only metadata fields and supported disposition fields. Structural
  fields that are editable in pattern-authoring workflows shall be read-only or
  absent in edit mode.
- `METADATA_EDITOR-0013`: The command shall write changes through an ffmpeg
  stream-copy remux workflow only. No transcoding shall be performed as part of
  `ffx edit`.
- `METADATA_EDITOR-0013A`: The ffmpeg invocation used by `ffx edit` shall map
  all source streams with `-map 0` and shall copy all mapped streams with a
  single `-c copy`. It shall not emit conversion-style per-stream `-map` or
  `-c:*` options that could drop, reorder, or transcode streams during a
  metadata-only edit.
- `METADATA_EDITOR-0014`: Because ffmpeg cannot rewrite the source file in
  place, `ffx edit` shall write to a temporary output file on the same
  filesystem as the source file and shall replace the original path only after
  ffmpeg reports success.
- `METADATA_EDITOR-0015`: The temporary output path shall preserve the original
  container type and file extension. The feature shall not silently change the
  container or extension during a metadata-only edit.
- `METADATA_EDITOR-0016`: If the rewrite step fails, the original file shall
  remain untouched. The system shall not leave the user with a partially
  replaced source file.
- `METADATA_EDITOR-0017`: After a successful replace, the application shall
  reprobe the rewritten file, refresh the baseline descriptor from disk, reset
  the draft state to that fresh baseline, and clear the dirty state.
- `METADATA_EDITOR-0018`: Edit mode shall track whether unsaved draft changes
  exist and shall require confirmation before dismissing the screen or quitting
  the app when such changes would be lost.
- `METADATA_EDITOR-0019`: Edit mode shall not inject conversion-only encoding
  metadata such as encoder quality or preset markers.
- `METADATA_EDITOR-0020`: Signature-tag behavior shall be explicit for
  metadata-only editing. The default behavior shall not add a misleading
  recoding-style signature to a file that was only remuxed for metadata
  updates.
- `METADATA_EDITOR-0021`: Configured metadata-removal rules from the local
  configuration shall be surfaced clearly in the UI and in the planned-changes
  view. If those rules are applied during save, the operator shall be able to
  tell that the file will be cleaned in addition to any manual edits.
- `METADATA_EDITOR-0022`: Edit mode shall provide an in-screen operator toggle
  for config-driven cleanup so a user can switch between pure manual metadata
  edits and metadata edits plus configured tag cleanup without leaving the
  editor.
- `METADATA_EDITOR-0023`: The existing global `--dry-run` behavior shall apply
  to `ffx edit`. In dry-run mode the command shall not replace the original
  file and shall expose the planned write operation clearly enough for the user
  to understand what would happen.
- `METADATA_EDITOR-0024`: Every ffmpeg invocation performed by `ffx edit`
  shall be surfaced to the operator as a notification in the edit UI.
- `METADATA_EDITOR-0025`: When application verbosity is greater than zero, the
  notification for an `ffx edit` ffmpeg invocation shall include the concrete
  ffmpeg command line.

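Rules 0013A through 0016 together describe a remux-then-replace flow. A minimal sketch follows; the helper names are hypothetical (the real implementation goes through `FfxController`), and standard `ffmpeg` flags are assumed, including `-y` so ffmpeg may write over the pre-created temporary file:

```python
import os
import subprocess
import tempfile


def metadata_remux_argv(source: str, output: str, container_tags: dict) -> list[str]:
    """Build a metadata-only ffmpeg command per METADATA_EDITOR-0013A:
    all streams mapped with `-map 0`, all copied with a single `-c copy`,
    container tags passed as `-metadata key=value` options."""
    argv = ["ffmpeg", "-y", "-i", source, "-map", "0", "-c", "copy"]
    for key, value in container_tags.items():
        argv += ["-metadata", f"{key}={value}"]
    argv.append(output)
    return argv


def save_in_place(source: str, container_tags: dict) -> None:
    # Write to a temporary file in the source directory (same filesystem),
    # keeping the original extension (METADATA_EDITOR-0014/-0015).
    directory = os.path.dirname(os.path.abspath(source))
    suffix = os.path.splitext(source)[1]
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=suffix)
    os.close(fd)
    try:
        subprocess.run(metadata_remux_argv(source, tmp_path, container_tags),
                       check=True)
        # Replace the original only after ffmpeg succeeded; os.replace is
        # atomic on POSIX within one filesystem (METADATA_EDITOR-0016).
        os.replace(tmp_path, source)
    except Exception:
        os.unlink(tmp_path)  # failure leaves the original file untouched
        raise
```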
## Acceptance

- `ffx edit /path/to/file.mkv` opens successfully on a workstation where the
  configured database is missing, empty, incompatible, or intentionally
  inaccessible.
- Opening a file in edit mode does not trigger database bootstrap or migration
  prompts.
- A user can change a container tag, save, and see the rewritten file at the
  same path with the updated metadata.
- A user can change a stream title or language, save, and see the rewritten
  file at the same path with the updated stream metadata.
- A user can change `default` or `forced` on a track, save, and see the
  rewritten file at the same path with the updated dispositions.
- The planned-changes view reflects manual edits relative to the original file
  and, when enabled, any configured cleanup removals.
- No rendered Rich or Textual color markup appears in the saved file metadata.
- Saving metadata with files that contain PGS subtitle tracks or other
  non-text subtitle codecs preserves those streams instead of dropping them.
- If ffmpeg fails while saving, the original file remains present and readable
  at the original path.
- In dry-run mode, the original file remains untouched.

## Current Code Fit

- Good fit:
  - `FfxController.runJob(...)` already has a `VideoEncoder.COPY` path that
    can remux streams and apply metadata and disposition tokens.
  - `MediaDescriptorChangeSet` already computes container-tag, stream-tag, and
    disposition differences and can generate ffmpeg metadata tokens.
  - `TagDetailsScreen` and `TrackDetailsScreen` already provide reusable edit
    dialogs for draft state.
  - `PatternDetailsScreen` already demonstrates add, edit, and delete flows for
    tags and tracks in a draft-first UI.
- Refactor required:
  - `ffx` CLI initialization currently creates a database context for all
    non-lightweight commands, so `edit` needs its own DB-free bootstrap path.
  - `FileProperties` currently instantiates `PatternController` eagerly, so
    probing must be split from pattern matching or made lazy.
  - `MediaDetailsScreen` currently assumes `command == 'inspect'` and mixes
    file state with database-backed target-pattern state.
  - `MediaDetailsScreen` currently mutates the probed source descriptor
    directly. Edit mode needs an immutable baseline descriptor and a separate
    mutable draft descriptor.
  - `TrackDetailsScreen` currently exposes structural fields that are valid for
    pattern authoring but too dangerous for metadata-only file editing.

## Risks

- Container-level metadata support differs across formats, so some requested tag
  changes may not round-trip identically through ffmpeg for every supported
  container.
- The existing metadata-removal implementation is conversion-oriented and may
  remove tags more aggressively than a user expects from a manual editor unless
  cleanup policy is made explicit.
- The current codebase lacks a dedicated descriptor clone API, so draft-state
  separation should be implemented deliberately instead of via accidental shared
  references.
- Replacing a file path with a temporary output changes inode identity, so any
  future requirement around preserving timestamps, hard links, or extended
  attributes would need additional explicit handling.

68
requirements/pattern_management.md
Normal file
@@ -0,0 +1,68 @@
# Pattern Management

This file defines the behavioral contract for managing shows, patterns, and
pattern-backed filename matching.

Primary source: actual tool code in `src/ffx/`.
Secondary source: operator intent captured in task discussion.

## Scope

- The show, pattern, and track hierarchy stored in SQLite.
- The role of a pattern as a reusable normalization definition for related media files.
- Filename-driven assignment of a scanned media file to one show through one matching pattern.
- Duplicate-match handling when more than one pattern matches the same filename.

## Terms

- `show`: logical series identity such as one TV show entry in the database.
- `pattern`: regex-backed normalization definition attached to one show.
- `track`: one persisted target-track definition attached to one pattern.
- `scanned media file`: one source file currently being inspected or converted.
- `duplicate pattern match`: a filename state where more than one stored pattern matches the same scanned media file.
- `pattern-backed target schema`: the combination of one pattern's stored media tags and stored track definitions.

## Rules

- `PATTERN_MANAGEMENT-0001`: The domain model shall treat a show as the parent entity for patterns that describe distinct release families or normalization schemas for that show. A show may temporarily exist without patterns during editing or initial TUI creation.
- `PATTERN_MANAGEMENT-0002`: Each persisted pattern shall belong to exactly one show.
- `PATTERN_MANAGEMENT-0003`: The domain model shall treat a pattern as the reusable normalization definition for a series of media files expected to share the same internal track layout and materially similar stream and container metadata.
- `PATTERN_MANAGEMENT-0004`: Each persisted track definition shall belong to exactly one pattern.
- `PATTERN_MANAGEMENT-0005`: A pattern may also carry pattern-level media tags. The pattern's media tags plus its track definitions together form the pattern-backed target schema.
- `PATTERN_MANAGEMENT-0006`: A scanned media file shall resolve to at most one pattern and therefore at most one show.
- `PATTERN_MANAGEMENT-0007`: If no pattern matches a filename, the file shall remain unmatched rather than being assigned implicitly.
- `PATTERN_MANAGEMENT-0008`: If more than one pattern matches the same filename, the system shall raise a duplicate pattern match error instead of silently selecting one.
- `PATTERN_MANAGEMENT-0009`: Duplicate-match detection shall apply regardless of whether the competing patterns belong to the same show or to different shows.
- `PATTERN_MANAGEMENT-0010`: Exact duplicate pattern definitions for the same show should not create multiple persisted pattern rows.
- `PATTERN_MANAGEMENT-0011`: A persisted pattern shall define one or more tracks. Creating or retaining a zero-track pattern in the database is invalid managed state and shall be prohibited.
- `PATTERN_MANAGEMENT-0012`: A show may exist without patterns as an intermediate editing state, for example when a user creates the show first in the TUI and adds patterns later.
- `PATTERN_MANAGEMENT-0013`: Operator-facing pattern management should expose the owning show, regex pattern, stored track set, and stored media-tag set so a user can reason about matching and normalization behavior.
- `PATTERN_MANAGEMENT-0014`: Matching semantics shall be deterministic and documented. Implicit "last matching pattern wins" behavior is not acceptable released behavior.

## Acceptance

- A filename that matches exactly one pattern yields one matched pattern and one show identity.
- A filename that matches no pattern yields no matched pattern and an unmatched state.
- A filename that matches more than one pattern yields an explicit duplicate-match error.
- A pattern-backed target schema can be reconstructed from one pattern's stored media tags and stored track definitions.
- A show may be stored before any patterns are attached to it.
- A pattern cannot be stored or retained as a valid managed pattern unless at least one track is defined for it.
- Pattern-backed conversion never proceeds with two competing matching patterns for the same input filename.

## Current Code Fit

- `src/ffx/model/show.py` implements a one-to-many `Show -> Pattern` relationship.
- `src/ffx/model/pattern.py` implements `Pattern.show_id`, a one-to-many `Pattern -> Track` relationship, a one-to-many `Pattern -> MediaTag` relationship, and a unique `(show_id, pattern)` constraint for freshly created databases.
- `src/ffx/model/track.py` implements `Track.pattern_id`, so each persisted track belongs to one pattern.
- `src/ffx/model/pattern.py` reconstructs a pattern-backed target schema through `Pattern.getMediaDescriptor(...)`, combining stored media tags and stored tracks.
- `src/ffx/file_properties.py` assumes a scanned file resolves to at most one pattern, because it stores only one `self.__pattern` and derives one `show_id` from it.
- `src/ffx/pattern_controller.py` prevents exact duplicate `(show_id, pattern)` definitions during create and update flows, and it refreshes cached compiled regexes when stored pattern expressions change.
- `src/ffx/pattern_controller.py` now complies with duplicate-match safety. `matchFilename(...)` scans deterministically, returns exactly one match, returns `{}` for no match, and raises an explicit duplicate-pattern-match error when more than one pattern matches the same filename.
- The current persistence layer already aligns with the intended empty-show workflow because a show can exist without patterns.
- New pattern creation and schema replacement flows now require at least one track, and `TrackController.deleteTrack(...)` prevents deleting the last persisted track from a pattern.
- Trackless legacy rows can still exist in preexisting databases, but matching now rejects them explicitly instead of letting them participate silently.

## Risks

- The intended "release family" meaning of a pattern is a domain assumption, not something the code verifies automatically across all files matching that pattern.
- Preexisting databases created before the newer validation rules may still contain invalid rows, so upgrade and cleanup paths should continue to treat explicit validation failures as recoverable operator signals.

124 requirements/project.md (Normal file)
@@ -0,0 +1,124 @@
## Purpose And Scope

- Project name: FFX
- User problem: TV episode files from mixed sources arrive with inconsistent codecs, stream metadata, subtitle layouts, season and episode numbering, and output filenames, which makes them awkward to archive and use in media-player applications.
- Target users: Individual operators curating a local TV media library on a workstation, especially users willing to define normalization rules per show.
- Success outcome: A user can inspect source files, define reusable show and pattern rules, and produce output files whose streams, metadata, and filenames follow a predictable schema for web playback and library import.
- Out of scope:
  - Multi-user or hosted service workflows.
  - General movie-library management.
  - Distributed transcoding or remote job orchestration.
  - Broad media-server administration beyond file preparation.
## Required Product

- Deliverable type: Installable Python command-line application with a Textual terminal UI for inspection and rule editing.
- Core capabilities:
  - Maintain an SQLite-backed database of shows, filename-matching patterns, per-pattern stream layouts and metadata tags, and optional season-shift rules.
  - Inspect existing media files through `ffprobe` and compare discovered stream metadata with stored normalization rules.
  - Convert media files through `ffmpeg` into a normalized output layout, including video recoding, audio transcoding to Opus, metadata cleanup and rewrite, and controlled disposition flags.
  - Build output filenames from detected or configured show, season, and episode information, optionally enriched from TMDB and a configurable Jinja-style filename template.
  - Support auxiliary file operations such as subtitle import, unmuxing, crop detection, rename-only conversion runs, and direct in-place episode renaming.
- Supported environments:
  - Local execution on a Python-capable workstation.
  - Best supported on Linux-like systems because the implementation assumes `~/.local`, `/dev/null`, `nice`, and `cpulimit`.
  - Requires `ffmpeg`, `ffprobe`, and `cpulimit` on `PATH`.
- Operational owner: The local user running the tool and maintaining its config, database, and external tooling.
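The `ffprobe`-based inspection capability can be sketched like this. The `ffprobe` flags shown are its real machine-readable interface; the helper names and the `summarize` reduction are illustrative assumptions, not the tool's actual API.

```python
import json
import subprocess

def probe(path):
    """Return ffprobe's JSON description of one media file."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def summarize(info):
    # Reduce a probe result to (index, codec_type, codec_name) triples.
    return [(s["index"], s["codec_type"], s.get("codec_name"))
            for s in info.get("streams", [])]

# summarize() works on any probe-shaped dict; real use: summarize(probe("episode.mkv"))
sample = {"streams": [{"index": 0, "codec_type": "video", "codec_name": "vp9"},
                      {"index": 1, "codec_type": "audio", "codec_name": "opus"}]}
print(summarize(sample))  # → [(0, 'video', 'vp9'), (1, 'audio', 'opus')]
```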
## Suggested User Stories

- As a library maintainer, I want to define show-specific matching rules once so that future source files can be normalized automatically.
- As an operator, I want to inspect a file before conversion so that I can compare its actual streams and tags against the stored target schema.
- As a user preparing web-playback files, I want to recode video and audio with a small set of predictable options so that results are compatible and consistently named.
- As a user dealing with nonstandard releases, I want CLI overrides for language, title, stream order, default and forced tracks, and season and episode data so that one-off fixes do not require database edits first.
- As a user importing anime or other shifted numbering schemes, I want season and episode offsets at the show level with optional pattern-specific overrides so that generated filenames align with TMDB and media-library expectations.
## Functional Requirements

- The system shall provide a CLI entrypoint named `ffx` with commands for `convert`, `inspect`, `shows`, `rename`, `unmux`, `cropdetect`, `setup`, `configure_workstation`, `upgrade`, `version`, and `help`.
- The system shall support a two-step local installation and preparation flow:
  - `tools/setup.sh` is the bootstrap entrypoint for the first step and shall own bundle virtualenv creation, package installation, shell alias exposure, and optional Python test-package installation.
  - `tools/configure_workstation.sh` is the bootstrap entrypoint for the second step and shall own workstation dependency checks and installation plus local config and directory seeding.
  - After the bundle is installed, `ffx setup` and `ffx configure_workstation` shall remain aligned wrapper entrypoints for those same two steps.
- The CLI command `ffx setup` shall act as a wrapper for the first-step bundle-preparation flow in `tools/setup.sh`.
- The CLI command `ffx configure_workstation` shall act as a wrapper for the second-step preparation flow in `tools/configure_workstation.sh`.
- The system shall persist reusable normalization rules in SQLite for:
  - shows and show formatting digits,
  - optional show-level notes,
  - optional show-level quality defaults,
  - regex-based filename patterns,
  - per-pattern media tags,
  - per-pattern stream definitions,
  - show-level and pattern-level shifted-season mappings,
  - internal database version properties.
- The system shall apply supported ordered database migrations automatically when opening an older local database file and shall fail fast when no supported path exists.
- Before applying a required database migration, the system shall show the current version, target version, required sequential steps, and whether each corresponding migration module is present, then require user confirmation.
- Before applying a confirmed file-backed database migration, the system shall create an in-place backup copy whose filename includes the covered version range.
- Detailed show, pattern, and duplicate-match management rules live in `requirements/pattern_management.md`.
- The system shall inspect source media using `ffprobe` and derive a structured description of container metadata and streams.
- The system shall optionally open a Textual UI to browse shows, inspect files, and create, edit, or delete shows, patterns, stream definitions, tags, and shifted-season rules.
- The system shall match filenames against stored regex patterns to decide whether an input file should inherit a target stream and metadata schema.
- The system shall convert supported input files (`mkv`, `mp4`, `avi`, `flv`, `webm`) with `ffmpeg`, supporting at least:
  - VP9, AV1, and H.264 video encoding,
  - Opus audio encoding with bitrate selection based on channel layout,
  - metadata and disposition rewriting,
  - optional crop detection and crop application,
  - optional deinterlacing and denoising,
  - optional subtitle import from external files,
  - rename-only move mode.
- The system shall support optional TMDB lookups to resolve show names, years, and episode titles when a show ID, season, and episode are available.
- The system shall generate output filenames from show metadata, season and episode indices, and episode names using the configured filename template.
- The system shall allow CLI overrides for stream languages, stream titles, default and forced tracks, stream order, TMDB show and episode data, output directory, label prefix, and processing resource limits.
- The system shall resolve encoding quality by precedence `CLI override -> pattern -> show -> encoder default` and shall report the chosen value and source.
- The system shall resolve season shifting by precedence `pattern -> show -> identity default` and shall report the chosen mapping and source.
- Processing resource limit rules:
  - `--nice` shall accept niceness values from `-20` through `19`; omitting the option shall disable niceness adjustment.
  - `--cpu` shall accept either a positive absolute `cpulimit` value such as `200`, or a percentage suffixed with `%` such as `25%` to represent a share of present CPUs; omitting the option or using `0` shall disable CPU limiting.
  - When both limits are configured, the process wrapper shall execute the target command through `cpulimit` around a `nice -n ...` invocation so both limits apply to the launched media command.
- The system shall support extracting streams into separate files via `unmux` and reporting suggested crop parameters via `cropdetect`.
- The system shall support in-place episode renaming via `rename`, requiring a `--prefix`, accepting optional `--season` and `--suffix` overrides, preserving the source extension, and supporting dry-run output without moving files.
- Crop detection shall use a configurable sampling window, defaulting to a 60-second seek and a 180-second analysis duration, and repeated crop-detection requests for the same source plus sampling window shall reuse cached results within one process.
- The system shall handle invalid input and system failures gracefully by logging warnings or raising `click` errors for missing files, invalid media, missing TMDB credentials, incompatible database versions, and ambiguous track dispositions when prompting is disabled.
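The quality-precedence requirement amounts to "first configured layer wins". A minimal sketch, assuming each layer is `None` when unset (the function name and return shape are illustrative, not the tool's API):

```python
def resolve_quality(cli_value, pattern_value, show_value, encoder_default):
    # First non-None layer wins: CLI override -> pattern -> show -> encoder default.
    for source, value in (("cli", cli_value), ("pattern", pattern_value),
                          ("show", show_value), ("encoder default", encoder_default)):
        if value is not None:
            return value, source
    raise ValueError("encoder default must always be set")

print(resolve_quality(None, 28, 30, 32))  # → (28, 'pattern')
print(resolve_quality(24, 28, 30, 32))    # → (24, 'cli')
```

Returning the source alongside the value is what lets the tool "report the chosen value and source" as required.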
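The resource-limit rules can be sketched as a command-wrapping function. `nice -n` and `cpulimit -l` with a `--` separator are the standard CLI forms of those tools; the wrapper function itself and the percentage-to-limit arithmetic (percent of present CPUs, expressed in `cpulimit`'s per-CPU-hundreds scale) are an assumed reading of the rule, not the actual implementation.

```python
import os

def limited_command(cmd, niceness=None, cpu=None):
    """Wrap cmd in optional nice and cpulimit layers (a sketch)."""
    if niceness is not None:
        cmd = ["nice", "-n", str(niceness)] + cmd
    if cpu:  # omitting the option disables CPU limiting; "0" computes to no limit
        if cpu.endswith("%"):
            # e.g. "25%" of 8 present CPUs -> cpulimit value 200
            limit = int(int(cpu[:-1]) / 100 * (os.cpu_count() or 1) * 100)
        else:
            limit = int(cpu)
        if limit > 0:
            cmd = ["cpulimit", "-l", str(limit), "--"] + cmd
    return cmd

print(limited_command(["ffmpeg", "-i", "in.mkv"], niceness=10, cpu="200"))
# → ['cpulimit', '-l', '200', '--', 'nice', '-n', '10', 'ffmpeg', '-i', 'in.mkv']
```

Wrapping `cpulimit` around `nice` matches the requirement that both limits apply to the launched media command.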
## Quality Requirements

- The system should stay understandable as a small local tool: controllers, descriptors, models, and screens should remain separate enough for contributors to trace a workflow end to end.
- The system should produce predictable output for the same database rules, CLI overrides, and source files.
- The system should preserve a lightweight operational footprint: local SQLite state, a local log file, and no mandatory background services.
- The system should be testable through modern automatically discovered tests and through remaining legacy harness coverage during migration.
- The system should expose enough logging to diagnose failed probes, failed conversions, and rule mismatches without requiring a debugger.
## Constraints And Assumptions

- Technology constraints:
  - Python package built with setuptools.
  - Primary libraries: `click`, `textual`, `sqlalchemy`, `jinja2`, `requests`.
  - Conversion and inspection rely on external executables rather than pure-Python media libraries.
- Hosting or infrastructure constraints:
  - Intended for local execution, not server deployment.
  - Stores default state in `~/.local/etc/ffx.json`, `~/.local/var/ffx/ffx.db`, and `~/.local/var/log/ffx.log`.
- Timeline constraints:
  - The current implemented scope reflects a compact alpha release stream up to version `0.3.1`.
- Team capacity assumptions:
  - Maintained as a small codebase where simple patterns and direct controller logic are preferred over framework-heavy abstractions.
- Third-party dependencies:
  - `ffmpeg`, `ffprobe`, and `cpulimit`.
  - TMDB API access through `TMDB_API_KEY` for metadata enrichment.
- Installation assumptions:
  - The Python-side bundle install step and optional Python test extras are managed by `tools/setup.sh`, with `ffx setup` as the aligned wrapper after bootstrap.
  - The workstation-preparation step is managed separately by `tools/configure_workstation.sh` or `ffx configure_workstation`.
## Acceptance Scope

- First release boundary:
  - Local installation through `pip`.
  - Working SQLite-backed rule storage.
  - Functional CLI conversion and inspection workflows.
  - Textual CRUD flows for shows, patterns, tags, tracks, and shifted seasons.
  - TMDB-assisted filename generation, subtitle import, season shifting, database versioning, and configurable output filename templating.
- Excluded follow-up ideas:
  - Completing placeholder screens such as settings and help.
  - Hardening platform portability beyond Linux-like systems.
  - Broader media types, richer release packaging, and production-grade background processing.
- Demonstration scenario:
  - Inspect a TV episode file, define or update the matching show and pattern in the TUI, then run `ffx convert` so the result uses the stored stream schema, optional TMDB episode naming, and a normalized output filename.
177 requirements/shifted_seasons_handling.md (Normal file)
@@ -0,0 +1,177 @@
# Numbering Mapping Handling

This file defines the behavioral contract for mapping source season and episode numbering to target season and episode numbering through stored shifted-season rules.

Primary sources:

- `requirements/project.md`
- `requirements/architecture.md`
- actual tool code in `src/ffx/`

Secondary source:

- `SCRATCHPAD.md`, used only to clarify current hardening gaps and not as the primary contract source.
## Scope

- Persisting shifted-season rules in SQLite.
- Allowing shifted-season rules to be attached either to a show or to a specific pattern.
- Selecting at most one active shifted-season rule for one concrete source season and episode tuple.
- Applying additive season and episode offsets to produce target numbering.
- Using shifted target numbering during `convert` for TMDB episode lookup and generated season and episode filename tokens.
- Managing show-level default mappings and pattern-level override mappings from the Textual editing workflows.

## Out Of Scope

- General filename parsing rules for detecting season and episode values.
- Standalone `rename` command behavior, which currently uses explicit rename inputs rather than stored shifted-season rules.
- Stream or track mapping behavior unrelated to season and episode numbering.
## Terms

- `shifted-season rule`: one persisted row describing how one source-numbering range maps to target numbering through additive offsets.
- `show-level shifted-season rule`: a rule attached directly to a show and used as the fallback mapping layer for that show.
- `pattern-level shifted-season rule`: a rule attached directly to a pattern and used as the override mapping layer for that pattern.
- `source numbering`: the season and episode values detected from the current source file or supplied as source-side conversion inputs before shifting.
- `target numbering`: the season and episode values after one active shifted-season rule has been applied.
- `original season`: the source-domain season number a shifted-season rule is eligible to match.
- `episode range`: the optional source-domain episode interval covered by one shifted-season rule.
- `open bound`: an unbounded start or end of the episode range. Current storage uses `-1` as the internal sentinel for an open bound.
- `active shifted-season rule`: the single rule selected for one concrete input after precedence resolution.
- `identity mapping`: the default `1:1` outcome where source numbering is used unchanged.
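The terms above can be summarized in a small data model. This is a sketch, not `src/ffx/model/shifted_season.py`: the class name, field layout, and method names are illustrative, though the `-1` open-bound sentinel and the additive-offset fields follow the contract in this file.

```python
from dataclasses import dataclass

OPEN = -1  # internal sentinel for an open episode bound

@dataclass
class ShiftedSeasonRule:
    original_season: int
    first_episode: int   # OPEN means no lower bound
    last_episode: int    # OPEN means no upper bound
    season_offset: int
    episode_offset: int

    def matches(self, season, episode):
        # Source season must equal original_season; closed bounds constrain episode.
        if season != self.original_season:
            return False
        if self.first_episode != OPEN and episode < self.first_episode:
            return False
        if self.last_episode != OPEN and episode > self.last_episode:
            return False
        return True

    def apply(self, season, episode):
        # Additive offsets produce target numbering.
        return season + self.season_offset, episode + self.episode_offset

# Example: an anime release numbers S02 episodes as S01E13 onward.
rule = ShiftedSeasonRule(original_season=1, first_episode=13,
                         last_episode=OPEN, season_offset=1, episode_offset=-12)
print(rule.matches(1, 14))  # → True
print(rule.apply(1, 14))    # → (2, 2)
print(rule.matches(2, 14))  # → False
```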
## Rules

- `SHIFTED_SEASONS_HANDLING-0001`: The domain model shall allow a shifted-season rule to be owned by exactly one of:
  - one show
  - one pattern
- `SHIFTED_SEASONS_HANDLING-0002`: A single shifted-season rule shall not belong to both a show and a pattern at the same time.
- `SHIFTED_SEASONS_HANDLING-0003`: A shifted-season rule shall carry these fields: `original_season`, `first_episode`, `last_episode`, `season_offset`, and `episode_offset`.
- `SHIFTED_SEASONS_HANDLING-0004`: `season_offset` and `episode_offset` shall be additive signed integers applied to matched source numbering to produce target numbering.
- `SHIFTED_SEASONS_HANDLING-0005`: A shifted-season rule shall match a source tuple only when:
  - the source season equals `original_season`
  - the source episode is greater than or equal to `first_episode` when the lower bound is closed
  - the source episode is less than or equal to `last_episode` when the upper bound is closed
- `SHIFTED_SEASONS_HANDLING-0006`: An open lower or upper episode bound shall represent an unbounded side of the covered source episode range.
- `SHIFTED_SEASONS_HANDLING-0007`: If one shifted-season rule matches, target numbering shall be:
  - `target season = source season + season_offset`
  - `target episode = source episode + episode_offset`
- `SHIFTED_SEASONS_HANDLING-0008`: If no shifted-season rule matches, source numbering shall pass through unchanged.
- `SHIFTED_SEASONS_HANDLING-0009`: Shifted-season handling shall operate in a source-to-target numbering model. Stored rules map detected source numbering to the target numbering used by conversion-facing metadata and output naming.
- `SHIFTED_SEASONS_HANDLING-0010`: Pattern matching identifies the owning show and optionally a more specific owning pattern. Resolution of the active shifted-season rule shall use this precedence order:
  - matching pattern-level rule
  - matching show-level rule
  - identity mapping
- `SHIFTED_SEASONS_HANDLING-0011`: At most one shifted-season rule may be active for one concrete source season and episode tuple. Shifted-season rules shall never stack or compose.
- `SHIFTED_SEASONS_HANDLING-0012`: Within one owner scope, shifted-season rules shall not overlap in their effective episode coverage for the same `original_season`.
- `SHIFTED_SEASONS_HANDLING-0013`: If a shifted-season rule uses two closed episode bounds, `last_episode` shall be greater than or equal to `first_episode`.
- `SHIFTED_SEASONS_HANDLING-0014`: Shifted-season rule evaluation shall be deterministic. Released behavior shall not depend on arbitrary database row order when invalid overlapping rules exist.
- `SHIFTED_SEASONS_HANDLING-0015`: A pattern-level rule is permitted to map to zero offsets. Such a rule is a valid explicit override that beats show-level fallback and produces identity mapping for its covered source range.
- `SHIFTED_SEASONS_HANDLING-0016`: During `convert`, when show, season, and episode values are available and stored shifting is active, the shifted target numbering shall drive:
  - TMDB episode lookup
  - season and episode filename tokens such as `S01E02`
  - generated episode basenames that include season and episode numbering
- `SHIFTED_SEASONS_HANDLING-0017`: When conversion is supplied explicit target-domain season or episode values for TMDB naming, the system shall not apply stored shifting on top of those already-targeted values.
- `SHIFTED_SEASONS_HANDLING-0018`: Operator-facing editing shall expose shifted-season rule management in both of these places:
  - show editing for show-level default mappings
  - pattern editing for pattern-level override mappings
- `SHIFTED_SEASONS_HANDLING-0019`: User-facing shifted-season editing should present open episode bounds as a natural empty-state input rather than forcing operators to type the internal sentinel directly.
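The precedence and single-active-rule rules above can be sketched together. This is an illustrative self-contained example, not `shifted_season_controller` code: the `Rule` stub and function name are assumptions, and the stub matches on season only to keep the example short.

```python
class Rule:
    # Minimal stand-in for one stored shifted-season rule.
    def __init__(self, original_season, season_offset, episode_offset):
        self.original_season = original_season
        self.season_offset = season_offset
        self.episode_offset = episode_offset

    def matches(self, season, episode):
        return season == self.original_season

    def apply(self, season, episode):
        return season + self.season_offset, episode + self.episode_offset

def resolve_numbering(season, episode, pattern_rules, show_rules):
    # Precedence: matching pattern-level rule, then show-level rule, then identity.
    for scope, rules in (("pattern", pattern_rules), ("show", show_rules)):
        hits = [r for r in rules if r.matches(season, episode)]
        if len(hits) > 1:  # overlapping rules in one scope are invalid data
            raise ValueError(f"overlapping {scope}-level rules")
        if hits:
            return hits[0].apply(season, episode), scope
    return (season, episode), "identity"

show = [Rule(1, 1, 0)]      # show-level fallback: S1 -> S2
override = [Rule(1, 0, 0)]  # pattern-level explicit zero-offset override
print(resolve_numbering(1, 5, override, show))  # → ((1, 5), 'pattern')
print(resolve_numbering(1, 5, [], show))        # → ((2, 5), 'show')
print(resolve_numbering(3, 5, [], show))        # → ((3, 5), 'identity')
```

The zero-offset pattern rule winning over the nonzero show rule is exactly the `SHIFTED_SEASONS_HANDLING-0015` case.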
## Acceptance

- A show can exist with zero or more show-level shifted-season rules.
- A pattern can exist with zero or more pattern-level shifted-season rules.
- A shifted-season rule is stored against exactly one owner scope.
- A source tuple matching a pattern-level rule yields target numbering from that rule even when a matching show-level rule also exists.
- A source tuple matching no pattern-level rule but matching a show-level rule yields target numbering from the show-level rule.
- A source tuple matching neither scope yields identity mapping.
- A pattern-level zero-offset rule can explicitly override a nonzero show-level rule for the same covered source range.
- Two shifted-season rules for the same owner scope and original season cannot both be valid if they cover overlapping episode ranges.
- During `convert`, shifted numbering is what TMDB episode lookup and generated season and episode tokens see when stored shifting is active.
- The TUI can display and maintain shifted-season rules from both the show and pattern editing flows.
## Current Code Fit

- `src/ffx/model/show.py` and `src/ffx/model/pattern.py` now both expose shifted-season relationships, and `src/ffx/model/shifted_season.py` stores each rule against exactly one owner scope through `show_id` or `pattern_id`.
- `src/ffx/shifted_season_controller.py` now resolves mappings with pattern-over-show precedence and applies at most one active rule for a source tuple.
- `src/ffx/show_details_screen.py`, `src/ffx/shifted_season_details_screen.py`, and `src/ffx/shifted_season_delete_screen.py` provide reusable shifted-season editing dialogs, and `src/ffx/pattern_details_screen.py` now exposes the pattern-level override flow.
- `src/ffx/cli.py` now resolves shifted numbering during `convert` from: pattern-level match, then show-level match, then identity mapping.
- `src/ffx/database.py` now migrates version-2 databases to version 3 by preserving existing show-level rows and extending the schema for pattern-level ownership.
## Risks

- The current CLI groups `--show`, `--season`, and `--episode` under one override bucket used for TMDB-related behavior. Source-domain versus target-domain semantics of each override must stay documented clearly so stored shifting is neither skipped nor double-applied unexpectedly.
- Existing version-2 databases only contain show-owned shifted-season rows, so a version-3 migration must preserve those rows as the show-level fallback layer.
- Current modern automated test coverage for shifted-season behavior is light, so precedence, migration, and convert-time numbering behavior need focused tests.
90 requirements/source_file_formats.md (Normal file)
@@ -0,0 +1,90 @@
# Source File Formats

This file defines source-file-format-specific processing requirements for FFX. It is intended to grow as additional relevant source file types are identified.

The first covered format is Matroska media that contains styled ASS/SSA subtitle streams together with embedded font attachments.

## Scope

- Detecting source files that use ASS subtitle streams together with embedded font attachments needed for correct rendering.
- Defining the required `ffx convert` behavior when this format is present.
- Preserving the required attachment streams during conversion.
- Keeping normal subtitle-track manipulation behavior for the ASS subtitle tracks themselves.

## Out Of Scope

- General subtitle behavior for sources that do not carry this pattern.
- A complete catalog of all source file formats FFX may support later.

## Terms

- `styled ASS source`: a source media file that contains one or more subtitle streams with `codec_type="subtitle"` and `codec_name="ass"` together with one or more font-bearing attachment streams.
- `font attachment`: an attachment stream whose metadata identifies a font payload, commonly through `tags.mimetype` and attachment filename metadata.
- `external subtitle feed`: subtitle tracks supplied from separate subtitle files through the existing subtitle-import path.
- `special attachment subtracks`: the embedded font attachment streams that belong to the styled ASS source pattern.
## Rules

- `SOURCE_FILE_FORMATS-0001`: The system shall recognize the styled ASS source pattern.
- `SOURCE_FILE_FORMATS-0002`: Recognition shall not depend on fixed stream counts, fixed stream indices, or one exact attachment count.
- `SOURCE_FILE_FORMATS-0003`: Recognition shall use the best available ffprobe signals. For known subtitle streams this includes `codec_type="subtitle"` together with `codec_name="ass"`.
- `SOURCE_FILE_FORMATS-0004`: Recognition of the special attachment subtracks shall use attachment-oriented signals such as `codec_type="attachment"` and font-identifying metadata such as `tags.mimetype="font/ttf"` when present.
- `SOURCE_FILE_FORMATS-0005`: Recognition shall tolerate known ffprobe variation in attachment reporting, including files where attachment streams do not expose a `codec_name` but do expose `codec_type="attachment"` and font-identifying tags.
- `SOURCE_FILE_FORMATS-0006`: When attachment metadata varies across files, detection shall not depend on one exact MIME string alone. Detection shall be written so the known pattern can vary while still recognizing font attachments.
- `SOURCE_FILE_FORMATS-0007`: When the styled ASS source pattern is detected, `ffx convert` shall emit an operator-facing message that reports the detection and hints that special subtitle preservation handling is being applied.
- `SOURCE_FILE_FORMATS-0008`: When the styled ASS source pattern is present on the source file, `ffx convert` shall not process an external subtitle feed. The command shall stop before conversion and report an error that explains that separate subtitle-file import is incompatible with this source format.
- `SOURCE_FILE_FORMATS-0009`: Normal manipulation of the ASS subtitle streams themselves shall continue to work through the usual selection, ordering, metadata, language, title, and disposition handling paths.
- `SOURCE_FILE_FORMATS-0010`: The special attachment subtracks shall be preserved in the target media file as-is rather than transcoded, regenerated, or replaced from external sources.
- `SOURCE_FILE_FORMATS-0011`: Preserving the special attachment subtracks as-is includes retaining the attachment payload and the attachment metadata required by consumers, especially attachment filename and mimetype information.
- `SOURCE_FILE_FORMATS-0012`: This file shall remain the extension point for additional source-file-format contracts as FFX adds support for more special source formats.
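The detection rules above can be sketched over ffprobe stream dicts. This is an assumed heuristic, not the shipped detector: the function name and the exact MIME/filename fallbacks are illustrative, chosen to show tolerance for missing `codec_name` and varying MIME strings per rules 0005 and 0006.

```python
def is_styled_ass_source(streams):
    """Detect embedded ASS subtitles plus font attachments (a sketch)."""
    has_ass = any(s.get("codec_type") == "subtitle"
                  and s.get("codec_name") == "ass" for s in streams)

    def is_font(s):
        if s.get("codec_type") != "attachment":
            return False
        tags = s.get("tags", {})
        mime = tags.get("mimetype", "").lower()
        name = tags.get("filename", "").lower()
        # Tolerate MIME variation; fall back to filename extension hints.
        return "font" in mime or name.endswith((".ttf", ".otf"))

    return has_ass and any(is_font(s) for s in streams)

streams = [
    {"codec_type": "video", "codec_name": "h264"},
    {"codec_type": "subtitle", "codec_name": "ass"},
    # Some files omit codec_name on attachments; tags still identify a font.
    {"codec_type": "attachment",
     "tags": {"filename": "OpenSans.ttf",
              "mimetype": "application/x-truetype-font"}},
]
print(is_styled_ass_source(streams))  # → True
```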
## Acceptance

- A source file matching the observed pattern of embedded ASS subtitles plus font attachments is recognized even when the attachment streams do not carry a `codec_name`.
- `ffx convert` output contains a clear detection message before the actual conversion work proceeds.
- If external subtitle import is requested for such a source file, the command fails fast with an explicit error instead of mixing sidecar subtitles into the job.
- Existing manipulation of the ASS subtitle tracks still works for metadata, titles, languages, ordering, and dispositions.
- The output media preserves the required font attachment streams and their identifying metadata needed by downstream media players.
74 requirements/subtrack_mapping.md (Normal file)
@@ -0,0 +1,74 @@
# Subtrack Mapping

This file defines the behavioral contract for mapping input subtracks to output subtracks during conversion.

Primary source: actual tool code in `src/ffx/`.
Secondary source: `tests/legacy/`, used only to clarify intent and reveal gaps.

## Scope

- Ensuring each target subtrack is created from the corresponding source-subtrack information, including stream-level metadata.
- Mapping input streams to output streams during conversion.
- Using persisted pattern-track definitions from the database as the target schema.
- Allowing omission and reordering of retained tracks.
- Keeping stream-level metadata attached to the correct source-derived logical track after remapping.
- Normalizing target output into ordered track groups: video, audio, subtitle, then special types such as fonts or images.
## Terms

- `source_index`: identity of the originating input stream from ffprobe or an imported source descriptor.
- `index`: final output-track order across all retained tracks.
- `sub_index`: per-type position within the retained tracks of one type, for example audio stream `0` or subtitle stream `1`.
- `target schema`: stored or constructed output-track definition that decides which tracks are kept, omitted, reordered, and rewritten.
- `separate source file`: additional file bound to one target track slot whose media payload replaces the regular source payload for that slot.
## Rules

- `SUBTRACK_MAPPING-0001`: The system shall represent source-stream identity separately from output order. `source_index`, `index`, and `sub_index` are distinct concepts and shall not be collapsed into one field.
- `SUBTRACK_MAPPING-0002`: The system shall derive `source_index` for probed tracks from the original ffprobe stream index and preserve that identity through conversion planning.
- `SUBTRACK_MAPPING-0003`: Pattern-backed track definitions stored in the database shall persist both target output order and originating source-stream identity.
- `SUBTRACK_MAPPING-0004`: When a filename matches a pattern, the pattern target schema shall be the source of truth for which source tracks are retained, which are omitted, and in what order retained tracks appear in the output.
- `SUBTRACK_MAPPING-0005`: A target track may refer only to an existing source track of the same type. Conversion shall fail fast when a target track refers to a nonexistent source stream or a source stream of a different type.
- `SUBTRACK_MAPPING-0006`: The ffmpeg mapping phase shall be generated from target output order while resolving each retained output track back to its originating source stream via `source_index`.
- `SUBTRACK_MAPPING-0007`: Reordering and omission shall preserve logical track identity. Stream-level metadata, titles, languages, and disposition decisions shall stay attached to the correct source-derived logical track after mapping.
- `SUBTRACK_MAPPING-0008`: The system shall support one-off CLI stream-order overrides without requiring prior database edits.
- `SUBTRACK_MAPPING-0009`: Operator-facing inspection and editing surfaces shall expose enough source-versus-target information to let a user reason about subtrack mapping decisions.
- `SUBTRACK_MAPPING-0010`: Test coverage for subtrack mapping shall assert source-derived identity, omission, and output order explicitly. Final track counts or final type sequences alone are insufficient proof of correct mapping.
- `SUBTRACK_MAPPING-0011`: Retained target tracks shall appear in ordered groups: video track or tracks first, then audio tracks, then subtitle tracks, then special types such as fonts or images. Within each group, the target schema shall define the order.
- `SUBTRACK_MAPPING-0012`: Track omission is valid when required by output compatibility, when needed to normalize source tracks into the required target group order and schema, or when explicitly requested by database rules or CLI options.
- `SUBTRACK_MAPPING-0013`: If source tracks do not already comply with the required target group order, conversion shall reorder retained tracks to match the target ordering contract without losing source-track identity or stream-level metadata lineage.
||||
|
||||
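A minimal sketch of how the validation and grouped-order mapping rules above could fit together. `PlannedTrack`, `build_map_tokens`, and the single-letter type codes are illustrative names, not the real FFX API; the actual planner works on richer descriptors.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlannedTrack:
    track_type: str    # "v", "a", "s", "t" (illustrative type codes)
    source_index: int  # original ffprobe stream index

GROUP_ORDER = {"v": 0, "a": 1, "s": 2, "t": 3}

def build_map_tokens(planned: list[PlannedTrack], source_types: dict[int, str]) -> list[str]:
    # Fail fast on references to missing or type-mismatched source streams
    # (SUBTRACK_MAPPING-0005), before any conversion job is launched.
    for track in planned:
        actual = source_types.get(track.source_index)
        if actual is None:
            raise ValueError(f"target refers to missing source stream {track.source_index}")
        if actual != track.track_type:
            raise ValueError(
                f"target type {track.track_type!r} does not match source stream "
                f"{track.source_index} of type {actual!r}"
            )
    # Emit -map arguments in grouped target order (SUBTRACK_MAPPING-0011);
    # the stable sort preserves the target schema's order within each group.
    ordered = sorted(planned, key=lambda tr: GROUP_ORDER[tr.track_type])
    tokens: list[str] = []
    for track in ordered:
        tokens += ["-map", f"0:{track.source_index}"]
    return tokens
```

Streams not referenced by the planned list are simply never mapped, which is how omission falls out of the target schema being the source of truth.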
## Separate Additional Source Files

- `SUBTRACK_MAPPING-0014`: A separate source file may substitute the media payload of one target subtrack without changing that target track's intended output position.
- `SUBTRACK_MAPPING-0015`: When a separate source file is used, the target track shall remain bound to the corresponding logical source track for mapping, validation, and metadata lineage.
- `SUBTRACK_MAPPING-0016`: Metadata for a substituted target track shall be merged from the regular source track and the separate source file when available.
- `SUBTRACK_MAPPING-0017`: If the separate source file provides a metadata field that is also present on the regular source track, the separate source file value shall win in the target output.
- `SUBTRACK_MAPPING-0018`: If a metadata field is absent from the separate source file, the system shall fall back to the corresponding metadata from the regular source track or target schema rewrite rules.

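The merge precedence in `SUBTRACK_MAPPING-0016` through `-0018` amounts to an overlay with fallback. A sketch, assuming metadata is represented as plain dicts (the real descriptors are richer):

```python
def merge_track_metadata(regular: dict, separate: dict) -> dict:
    # Start from the regular source track's metadata, then overlay the
    # separate source file's fields so they win on conflicts; fields the
    # separate file does not provide fall back to the regular track.
    merged = dict(regular)
    for key, value in separate.items():
        if value is not None:
            merged[key] = value
    return merged
```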
## Acceptance

- Given a source media descriptor and a pattern-backed target schema, the planned output tracks can be listed in final output order and each retained track can still be traced to one originating source stream.
- Planned output order follows grouped target order: video, audio, subtitle, then special types.
- Tracks not referenced by the target schema are omitted from output mapping.
- Tracks may also be omitted when they are incompatible with the chosen output format or explicitly excluded by database or CLI rules.
- Two retained target tracks never originate from the same source stream unless duplication is implemented explicitly as a separate feature.
- If target-track metadata is rewritten after reordering, it is written onto the correct source-derived logical track rather than the track that merely occupies the same final output position.
- Invalid target-to-source references fail deterministically before the conversion job is launched.
- If a separate source file substitutes one target track, that track keeps its target slot and ordering while metadata is merged with separate-file values taking precedence when both sides provide the same field.
- A test proving subtrack mapping must assert at least one of: exact `source_index` to output-order mapping, omission of named source tracks, or preservation of per-track metadata after reorder.

## Test Notes

- `tests/legacy/scenario.py` names pattern behavior as `Filter/Reorder Tracks`.
- `tests/legacy/scenario_4.py` is the strongest end-to-end signal because it runs DB-backed conversion and reapplies source indices before assertion.
- `tests/legacy/track_tag_combinator_2_0.py` and `tests/legacy/track_tag_combinator_3_4.py` sort result tracks by `source_index` before checking tags, which matches the intended identity model.
- Legacy permutation combinators define permutations, but their assertion functions are stubs.
- Some legacy scenarios produce `AP` and `SP` selectors but do not execute them.

## Risks

- `src/ffx/media_descriptor.py` contains an explicit `rearrangeTrackDescriptors()` path whose current implementation appears defective and under-tested.
- Separate-source-file metadata precedence is only partly expressed in current implementation paths and should be covered directly in the rewritten test suite.
- Production code expresses the mapping contract more clearly than the legacy harness, so a rewrite should add direct logic-level tests for mapping and reorder planning.

144
requirements/tests.md
Normal file
@@ -0,0 +1,144 @@
# Test Rewrite

This file captures the structure executed by `tests/legacy_runner.py` today and
defines the target shape for a complete rewrite.

Detailed product rules for source-to-target subtrack mapping live in
`requirements/subtrack_mapping.md`. This file describes only how tests cover
that area.

## Interpreter Requirement

- Agents shall run Python-side test commands with `~/.local/share/ffx.venv/bin/python`.
- This applies to the legacy harness, `unittest`, `pytest`, helper scripts, and `python -m ffx ...` test invocations.
- Agents shall not silently substitute `python`, `python3`, or another interpreter for Python-side test work.
- If `~/.local/share/ffx.venv/bin/python` is missing or not executable, agents shall stop and report the missing venv instead of continuing with Python-side test execution.

## Shell Environment Requirement

- Agents shall source `~/.bashrc` from an interactive Bash shell before running TMDB-dependent test commands or TMDB-dependent `python -m ffx ...` test invocations.
- Agents shall not source `~/.bashrc.d/interactive/77_tmdb.sh` directly for normal test work; `~/.bashrc` is the required entry point.
- In automation this means agents shall use an interactive Bash invocation such as `bash -ic 'source ~/.bashrc && ...'`, because a non-interactive `bash -lc` returns from `~/.bashrc` before the interactive fragments are loaded.
- If sourcing `~/.bashrc` still does not provide required shell environment such as `TMDB_API_KEY`, agents shall stop and report the missing environment instead of continuing with TMDB-dependent test execution.

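The fail-instead-of-substitute rule above can be expressed as a small guard. This is a sketch; the `FFX_VENV_PYTHON` override is purely illustrative and not a real FFX setting:

```python
import os

def resolve_test_interpreter() -> str:
    # Resolve the required venv interpreter; refuse to fall back to the
    # system python, per the interpreter requirement above.
    candidate = os.environ.get(
        "FFX_VENV_PYTHON",  # illustrative override for testing this guard
        os.path.expanduser("~/.local/share/ffx.venv/bin/python"),
    )
    if not os.access(candidate, os.X_OK):
        raise RuntimeError(f"required venv interpreter missing: {candidate}")
    return candidate
```

TMDB-dependent invocations would then wrap the resolved interpreter in an interactive shell, e.g. `bash -ic 'source ~/.bashrc && <interpreter> tests/legacy_runner.py run'`.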
## Current Harness

- Entrypoint: `~/.local/share/ffx.venv/bin/python tests/legacy_runner.py run`
- Runner style: custom Click CLI, not `pytest` or `unittest`
- Commands:
  - `run`: discover scenario files, instantiate each scenario, run yielded jobs
  - `dupe`: helper command that creates duplicate media fixtures; not part of the test run
- Filters: `--scenario`, `--variant`, `--limit`
- Shared context:
  - builds one mutable dict for the whole run
  - installs loggers and writes `ffx_test_report.log`
  - creates `ConfigurationController` eagerly
  - tracks only passed and failed counters
- Discovery:
  - scenario files: `tests/legacy/scenario_*.py`
  - combinators: `glob + importlib + inspect` by filename convention
  - ordering: implicit glob order, no explicit sorting
- Skip behavior:
  - Scenario 4 is skipped when `TMDB_API_KEY` is missing
  - only `TMDB_API_KEY_NOT_PRESENT_EXCEPTION` is caught at scenario construction time

## Current Scenarios

- `1`: `tests/legacy/scenario_1.py`
  - focus: basename generation without pattern lookup or TMDB
  - inputs per job: `1`
  - jobs: `140`
  - expected failures: `0`
  - execution: build one synthetic source file, run `~/.local/share/ffx.venv/bin/python -m ffx convert`, assert filename selectors only
  - selectors executed: `B`, `L`, `I`
  - selectors defined but not executed: `S`, `R`
- `2`: `tests/legacy/scenario_2.py`
  - focus: conversion matrix over media layouts, dispositions, tags, and permutations
  - inputs per job: `1`
  - jobs: `8193`
  - expected failures: `3267`
  - execution: build one synthetic source file, run `~/.local/share/ffx.venv/bin/python -m ffx convert`, probe result with `FileProperties`, assert track layout and selected audio and subtitle metadata
  - selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
  - selectors defined but not executed: `MT`, `AP`, `SP`, `J`
- `4`: `tests/legacy/scenario_4.py`
  - focus: pattern-driven batch conversion with SQLite state and live TMDB naming
  - inputs per job: `6`
  - jobs: `768`
  - expected failures: `336`
  - execution: build six synthetic preset files, recreate temp SQLite DB, insert show and pattern, run one batch convert command via `~/.local/share/ffx.venv/bin/python`, query TMDB during assertions
  - selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
  - selectors defined but not executed: `MT`, `AP`, `SP`, `J`
  - notes:
    - uses `MediaCombinator6` only
    - issues live HTTP requests through `TmdbController` with no request cache

## Current Combinator Families

- scenario files discovered: `3`
- basename combinators discovered: `2`
- media combinators discovered: `8`
- media tag combinators discovered: `3`
- disposition combinator 2 variants: `4`
- disposition combinator 3 variants: `5`
- track tag combinator 2 variants: `4`
- track tag combinator 3 variants: `5`
- indicator variants: `7`
- label variants: `2`
- show variants: `3`
- release variants: `3`
- permutation 2 variants: `2`
- permutation 3 variants: `3`

## Current Totals

- full run without TMDB: `8333`
- full run with TMDB: `9101`
- Scenario 4 generated source files: `4608`
- Scenario 4 live TMDB episode queries: `4608`

## Current Behavior Areas

- output basename rules for label, season and episode indicator, show name, and release suffix combinations
- track layout normalization across the eight media combinator shapes from `VA` through `VAASSS`
- two-track and three-track disposition edge cases, including intentional failure cases
- two-track and three-track track-tag preservation checks, including checks that sort results by source identity
- container-level media tag handling
- pattern-backed conversion against a temporary SQLite database
- TMDB-assisted episode naming for batch conversion

## Structural Findings

- The suite is process-heavy: most jobs run `ffmpeg` to generate a fixture and then spawn the FFX CLI as a subprocess.
- The suite is integration-first and has almost no isolated unit-level coverage for pure logic.
- The base `Combinator` class is a placeholder and is not the real abstraction boundary used by the suite.
- Many combinator methods are placeholders: there are `25` `pass` statements across the current test modules.
- Several assertion families are never executed because scenario selector dispatch is incomplete.
- Scenario comments mention a Scenario 3, but no `scenario_3.py` exists.
- `tests/legacy/_basename_combinator_1.py` is effectively orphaned because discovery only matches `basename_combinator_*.py`.
- `tests/legacy/disposition_combinator_2_3 .py` contains an embedded space in the filename and is still part of discovery.
- Expected failures are validated only as subprocess return-code matches, not as specific error types or messages.
- The current suite depends on `ffmpeg`, `ffprobe`, SQLite, the local Python environment, and for Scenario 4 a live TMDB API key plus network access.

## Rewrite Target

- Replace the custom Click harness with a standard test runner, preferably `pytest`.
- Split the suite into explicit layers: unit, integration, and optional external-system tests.
- Keep unit tests as the default path and make them runnable without `ffmpeg`, `ffprobe`, TMDB, or a user config directory.
- Model discovery explicitly in code instead of relying on glob-plus-reflection naming conventions.
- Convert the current Cartesian-product combinators into readable parametrized cases grouped by behavior area.
- Preserve the current behavior areas, but represent them with targeted cases instead of thousands of opaque variant IDs.
- Make every assertion family explicit and executable; there must be no selector that is produced but never consumed.
- Replace live TMDB access with fixtures or mocks in normal runs; any live-contract test must be opt-in.
- Replace ad hoc subprocess return-code checks with assertions on typed exceptions, stderr content, or structured outputs.
- Provide small reusable media fixtures or fixture builders so only a narrow integration slice needs `ffmpeg`-generated media.
- Make database tests self-contained and fast through temporary databases and direct controller-level assertions.
- Make ordering, naming, and selection deterministic so a contributor can predict exactly what will run.
- Expose a small smoke suite for quick local runs and CI, plus a separately marked slower integration suite.
- Prefer domain-oriented test modules over combinator-family modules: basename, pattern matching, metadata rewrite, track ordering, TMDB naming, CLI smoke, and failure handling.

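A sketch of turning one combinator family into explicit parametrized cases. `build_basename` is a stand-in for the production basename builder, and the rule set shown is illustrative, not the real FFX naming contract:

```python
BASENAME_CASES = [
    # (label, season, episode, expected basename, readable case id)
    ("Show", 1, 2, "Show - s01e02", "season-episode"),
    ("Show", None, None, "Show", "no-indicator"),
]

def build_basename(label, season, episode):
    # Stand-in for the production basename builder under test.
    if season is None or episode is None:
        return label
    return f"{label} - s{season:02d}e{episode:02d}"

def test_basename_cases():
    # With pytest this list maps directly onto @pytest.mark.parametrize,
    # giving each case a readable id instead of an opaque variant string.
    for label, season, episode, expected, case_id in BASENAME_CASES:
        assert build_basename(label, season, episode) == expected, case_id
```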
## Rewrite Acceptance

- A default local test run finishes quickly and without network access.
- A contributor can identify which behavior a failing test covers without decoding variant strings like `VAASSS-A:D10-S:T001`.
- All current intended failure behaviors remain covered, but each one is asserted directly and readably.
- The rewritten suite can be adopted by CI without requiring live TMDB credentials.

67
src/ffx/attachment_format.py
Normal file
@@ -0,0 +1,67 @@
from enum import Enum
import os


class AttachmentFormat(Enum):

    TTF = {'identifier': 'ttf', 'format': None, 'extension': 'ttf', 'label': 'TTF'}
    PNG = {'identifier': 'png', 'format': None, 'extension': 'png', 'label': 'PNG'}

    UNKNOWN = {'identifier': 'unknown', 'format': None, 'extension': None, 'label': 'UNKNOWN'}

    def identifier(self):
        return str(self.value['identifier'])

    def label(self):
        return str(self.value['label'])

    def format(self):
        return self.value['format']

    def extension(self):
        return str(self.value['extension'])

    @staticmethod
    def identify(identifier: str):
        formats = [f for f in AttachmentFormat if f.value['identifier'] == str(identifier)]
        if formats:
            return formats[0]
        return AttachmentFormat.UNKNOWN

    @staticmethod
    def identifyFfprobeStream(streamObj: dict):
        identifier = streamObj.get("codec_name")
        identifiedFormat = AttachmentFormat.identify(identifier)
        if identifiedFormat != AttachmentFormat.UNKNOWN:
            return identifiedFormat

        if str(streamObj.get("codec_type", "")).strip() != "attachment":
            return AttachmentFormat.UNKNOWN

        tags = streamObj.get("tags", {}) or {}
        mimetype = str(tags.get("mimetype", "")).strip().lower()
        filename = str(tags.get("filename", "")).strip().lower()
        filenameExtension = os.path.splitext(filename)[1]

        if (
            mimetype in {
                "font/ttf",
                "application/x-truetype-font",
                "application/x-font-ttf",
            }
            or "truetype" in mimetype
            or filenameExtension == ".ttf"
        ):
            return AttachmentFormat.TTF

        if mimetype in {"image/png", "image/x-png"} or filenameExtension == ".png":
            return AttachmentFormat.PNG

        return AttachmentFormat.UNKNOWN

    @staticmethod
    def fromTrackCodec(trackCodec):
        identifier = getattr(trackCodec, "identifier", None)
        if callable(identifier):
            return AttachmentFormat.identify(trackCodec.identifier())
        return AttachmentFormat.UNKNOWN
@@ -252,9 +252,15 @@ def buildRenameTargetFilename(
@click.pass_context
@click.option('--language', 'app_language', type=str, default='', help='Set application language')
@click.option('--database-file', type=str, default='', help='Path to database file')
@click.option(
    '--debug',
    is_flag=True,
    default=False,
    help='Enable debug-only TUI diagnostics such as the log pane',
)
@click.option('-v', '--verbose', type=int, default=0, help='Set verbosity of output')
@click.option("--dry-run", is_flag=True, default=False)
def ffx(ctx, app_language, database_file, verbose, dry_run):
def ffx(ctx, app_language, database_file, debug, verbose, dry_run):
    """FFX"""

    ctx.obj = {}
@@ -274,6 +280,7 @@ def ffx(ctx, app_language, database_file, verbose, dry_run):
    )
    set_current_language(resolvedLanguage)
    ctx.obj['language'] = resolvedLanguage
    ctx.obj['debug'] = bool(debug)

    if ctx.invoked_subcommand in LIGHTWEIGHT_COMMANDS:
        ctx.obj['dry_run'] = dry_run
@@ -287,6 +294,7 @@ def ffx(ctx, app_language, database_file, verbose, dry_run):

    ctx.obj['dry_run'] = dry_run
    ctx.obj['verbosity'] = verbose
    ctx.obj['debug'] = bool(debug)
    ctx.obj['language'] = resolve_application_language(
        cli_language=app_language,
        config_language=ctx.obj['config'].getLanguage(),
@@ -391,6 +399,20 @@ def runScriptWrapper(ctx, scriptPath, missingDescription, commandArgs):
    ctx.exit(completed.returncode)


def runTuiApp(ctx) -> None:
    from ffx.ffx_app import FfxApp
    from ffx.logging_utils import set_ffx_console_logging_enabled

    logger = ctx.obj.get('logger')
    set_ffx_console_logging_enabled(logger, enabled=False)

    try:
        app = FfxApp(ctx.obj)
        app.run()
    finally:
        set_ffx_console_logging_enabled(logger, enabled=True)


@ffx.command(name='setup')
@click.pass_context
@click.option('--check', is_flag=True, default=False, help='Only verify bundle-setup readiness')
@@ -527,14 +549,11 @@ def inspect(ctx, shift, filenames):
    if len(filenames) != 1:
        raise click.ClickException("Inspect without --shift requires exactly one filename.")

    from ffx.ffx_app import FfxApp

    ctx.obj['command'] = 'inspect'
    ctx.obj['arguments'] = {}
    ctx.obj['arguments']['filename'] = filenames[0]

    app = FfxApp(ctx.obj)
    app.run()
    runTuiApp(ctx)


@ffx.command()
@@ -544,8 +563,6 @@ def edit(ctx, filename):
    if not os.path.isfile(filename):
        raise click.ClickException(f"File not found: {filename}")

    from ffx.ffx_app import FfxApp

    ctx.obj['command'] = 'edit'
    ctx.obj['arguments'] = {'filename': filename}
    ctx.obj['use_pattern'] = False
@@ -554,8 +571,7 @@ def edit(ctx, filename):
    ctx.obj['apply_metadata_normalization'] = True
    ctx.obj['resource_limits'] = ctx.obj.get('resource_limits', {})

    app = FfxApp(ctx.obj)
    app.run()
    runTuiApp(ctx)


@ffx.command()
@@ -615,29 +631,33 @@ def rename(ctx, paths, prefix, season, suffix, dry_run):


def getUnmuxSequence(trackDescriptor: TrackDescriptor, sourcePath, targetPrefix, targetDirectory = ''):
    from ffx.track_codec import TrackCodec
    from ffx.track_type import TrackType

    # executable and input file
    commandTokens = list(FFMPEG_COMMAND_TOKENS) + ['-i', sourcePath]

    trackType = trackDescriptor.getType()
    trackCodec = trackDescriptor.getCodec()
    trackFormat = trackDescriptor.getFormatDescriptor()

    targetPathBase = os.path.join(targetDirectory, targetPrefix) if targetDirectory else targetPrefix

    # mapping
    commandTokens += ['-map',
                      f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}",
                      '-c',
                      'copy']
    commandTokens += ['-map', f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}"]

    trackCodec = trackDescriptor.getCodec()
    if trackType == TrackType.VIDEO and trackCodec == TrackCodec.H265:
        commandTokens += ['-c:v', 'copy', '-bsf:v', 'hevc_mp4toannexb']
    else:
        commandTokens += ['-c', 'copy']

    # output format
    codecFormat = trackCodec.format()
    codecFormat = trackFormat.format()
    if codecFormat is not None:
        commandTokens += ['-f', codecFormat]

    # output filename
    commandTokens += [f"{targetPathBase}.{trackCodec.extension()}"]
    commandTokens += [f"{targetPathBase}.{trackFormat.extension()}"]

    return commandTokens

@@ -752,7 +772,7 @@ def unmux(ctx,
    if not ctx.obj['dry_run']:

        #TODO #425: Codec Enum
        ctx.obj['logger'].info(f"Unmuxing stream {trackDescriptor.getIndex()} into file {targetPrefix}.{trackDescriptor.getCodec().extension()}")
        ctx.obj['logger'].info(f"Unmuxing stream {trackDescriptor.getIndex()} into file {targetPrefix}.{trackDescriptor.getFormatDescriptor().extension()}")

        ctx.obj['logger'].debug(f"Executing unmuxing sequence")

@@ -837,12 +857,8 @@ def cropdetect(ctx,
@click.pass_context

def shows(ctx):
    from ffx.ffx_app import FfxApp

    ctx.obj['command'] = 'shows'

    app = FfxApp(ctx.obj)
    app.run()
    runTuiApp(ctx)


def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
@@ -943,7 +959,6 @@ def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
    metavar="DURATION|START,DURATION",
    is_flag=False,
    flag_value=DEFAULT_CUT_OPTION_VALUE,
    default=None,
    callback=normalizeCutOption,
    help=CUT_OPTION_HELP,
)
@@ -1299,10 +1314,12 @@ def convert(ctx,
    sourceMediaDescriptor = mediaFileProperties.getMediaDescriptor()


    from ffx.attachment_format import AttachmentFormat

    if ([smd for smd in sourceMediaDescriptor.getSubtitleTracks()
         if smd.getCodec() == TrackCodec.ASS]
        and [amd for amd in sourceMediaDescriptor.getAttachmentTracks()
             if amd.getCodec() == TrackCodec.TTF]):
             if amd.getAttachmentFormat() == AttachmentFormat.TTF]):

        targetFormat = ''
        targetExtension = 'mkv'
@@ -3,6 +3,7 @@ from textual.screen import Screen
from textual.widgets import Button, Footer, Header, Static

from .i18n import t
from .screen_support import build_screen_log_pane

class ConfirmScreen(Screen):

@@ -58,8 +59,16 @@ class ConfirmScreen(Screen):
        yield Button(self.__confirmLabel, id="confirm_button")
        yield Button(self.__cancelLabel, id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()


    def on_mount(self):

        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"


    def on_button_pressed(self, event: Button.Pressed) -> None:
        if event.button.id == "confirm_button":
            self.dismiss(True)

@@ -1,4 +1,4 @@
VERSION='0.2.6'
VERSION='0.3.1'
DATABASE_VERSION = 3

DEFAULT_QUALITY = 32

@@ -4,6 +4,7 @@ from .i18n import set_current_language, t
from .shows_screen import ShowsScreen
from .inspect_details_screen import InspectDetailsScreen
from .media_edit_screen import MediaEditScreen
from .screen_support import configure_screen_log_handler, set_screen_log_pane_enabled


class FfxApp(App):
@@ -22,6 +23,13 @@ class FfxApp(App):
        # Data 'input' variable
        self.context = context
        set_current_language(self.context.get("language"))
        debug_mode = bool(self.context.get("debug", False))
        set_screen_log_pane_enabled(debug_mode)
        configure_screen_log_handler(
            self.context.get("logger"),
            self,
            enabled=debug_mode,
        )


    def on_mount(self) -> None:

@@ -1,4 +1,5 @@
import os, click
import os, click, subprocess
from functools import lru_cache
from logging import Logger

from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
@@ -61,6 +62,41 @@ class FfxController():
            sourceMediaDescriptor)

        self.__logger: Logger = context['logger']
        self.__warnedH264Fallback = False


    @staticmethod
    @lru_cache(maxsize=None)
    def isFfmpegEncoderAvailable(encoderName: str) -> bool:
        completed = subprocess.run(
            ["ffmpeg", "-encoders"],
            capture_output=True,
            text=True,
            check=False,
        )
        if completed.returncode != 0:
            return False

        resolvedEncoderName = str(encoderName).strip()

        for line in completed.stdout.splitlines():
            if not line.startswith(" "):
                continue

            tokens = line.split(maxsplit=2)
            if len(tokens) >= 2 and tokens[1] == resolvedEncoderName:
                return True

        return False


    @classmethod
    def getSupportedSoftwareH264Encoder(cls) -> str | None:
        if cls.isFfmpegEncoderAvailable("libx264"):
            return "libx264"
        if cls.isFfmpegEncoderAvailable("libopenh264"):
            return "libopenh264"
        return None


    def executeCommandSequence(self, commandSequence):
@@ -79,10 +115,27 @@ class FfxController():

    # -c:v libx264 -preset slow -crf 17
    def generateH264Tokens(self, quality, subIndex : int = 0):
        h264Encoder = self.getSupportedSoftwareH264Encoder()

        return [f"-c:v:{int(subIndex)}", 'libx264',
                "-preset", "slow",
                '-crf', str(quality)]
        if h264Encoder == "libx264":
            return [f"-c:v:{int(subIndex)}", 'libx264',
                    "-preset", "slow",
                    '-crf', str(quality)]

        if h264Encoder == "libopenh264":
            if not self.__warnedH264Fallback:
                self.__logger.warning(
                    "libx264 encoder unavailable; falling back to libopenh264 for H.264 encoding."
                )
                self.__warnedH264Fallback = True

            return [f"-c:v:{int(subIndex)}", 'libopenh264',
                    '-pix_fmt', 'yuv420p']

        raise click.ClickException(
            "H.264 encoding requested but no supported software H.264 encoder is available. "
            + "Tried libx264 and libopenh264."
        )


    # -c:v:0 libvpx-vp9 -row-mt 1 -crf 32 -pass 1 -speed 4 -frame-parallel 0 -g 9999 -aq-mode 0

@@ -3,7 +3,7 @@ from textual.screen import Screen
from textual.widgets import Footer, Placeholder

from .i18n import t
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit

class HelpScreen(Screen):
    BINDINGS = [
@@ -17,7 +17,15 @@ class HelpScreen(Screen):
    def compose(self) -> ComposeResult:
        # Row 1
        yield Placeholder(t("Help Screen"))
        yield build_screen_log_pane()
        yield Footer()


    def on_mount(self):

        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"


    def action_back(self):
        go_back_or_exit(self)

@@ -6,12 +6,23 @@ from .configuration_controller import ConfigurationController
from .logging_utils import get_ffx_logger
from .show_descriptor import ShowDescriptor

from enum import Enum


class EmptyStringUndefined(Undefined):
    def __str__(self):
        return ''


class LogLevel(Enum):

    DEBUG = 'debug'
    INFO = 'info'
    WARNING = 'warning'
    ERROR = 'error'
    CRITICAL = 'critical'


DIFF_ADDED_KEY = 'added'
DIFF_REMOVED_KEY = 'removed'
DIFF_CHANGED_KEY = 'changed'
@@ -119,7 +130,7 @@ def setDiff(a : set, b : set) -> set:
def permutateList(inputList: list, permutation: list):

    # 0,1,2: ABC
    # 0,2,1: ACB
    # 0,2,1: ACBffmpeg:
    # 1,2,0: BCA

    pass

@@ -8,6 +8,7 @@ from textual.widgets import Button, Footer, Header, Input, Static
from textual.widgets._data_table import CellDoesNotExist

from ffx.file_properties import FileProperties
from ffx.helper import DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.show_descriptor import ShowDescriptor
from ffx.track_descriptor import TrackDescriptor
@@ -18,6 +19,7 @@ from .pattern_details_screen import PatternDetailsScreen
from .screen_support import (
    add_auto_table_column,
    build_screen_controllers,
    build_screen_log_pane,
    go_back_or_exit,
    localized_column_width,
    update_table_column_label,
@@ -37,8 +39,8 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
    CSS = f"""

    Grid {{
        grid-size: 6 11;
        grid-rows: 9 2 2 2 2 8 2 2 2 8 8;
        grid-size: 6 8;
        grid-rows: 9 2 2 2 2 10 2 10;
        grid-columns: {GRID_COLUMN_LABEL_MIN} {GRID_COLUMN_2} {GRID_COLUMN_3} {GRID_COLUMN_4} {GRID_COLUMN_5} {GRID_COLUMN_6};
        height: 100%;
        width: 100%;
@@ -86,6 +88,10 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
    #differences-table {{
        row-span: 10;
    }}

    .yellow {{
        tint: yellow 40%;
    }}
    """

    @classmethod
@@ -155,6 +161,7 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
        yield Static(" ")
        yield self.differencesTable


        # Row 2
        yield Static(" ", classes="five")

@@ -163,33 +170,31 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
        yield Button(t("Substitute"), id="pattern_button")
        yield Static(" ", classes="three")


        # Row 4
        yield Static(t("Pattern"))
        yield Input(type="text", id="pattern_input", classes="three")
        yield Static(" ")


        # Row 5
        yield Static(" ", classes="five")

        # Row 6
        yield Static(t("Media Tags"))
        yield self.mediaTagsTable
        yield Static(" ", classes="two")
        yield Static(" ")


        # Row 7
        yield Static(" ", classes="five")

        # Row 8
        yield Static(" ")
        yield Button(t("Set Default"), id="select_default_button")
        yield Button(t("Set Forced"), id="select_forced_button")
        yield Static(" ", classes="two")

        # Row 9
        yield Static(t("Streams"))
        yield self.tracksTable
        yield Static(" ")

        yield build_screen_log_pane()
        yield Footer()

    def _update_grid_layout(self) -> None:
@@ -205,6 +210,30 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
    def action_back(self):
        go_back_or_exit(self)

    def getDisplayedMediaDescriptor(self):
        if self._currentPattern is not None and self._targetMediaDescriptor is not None:
            return self._targetMediaDescriptor
        return self._sourceMediaDescriptor

    def getTrackEditSourceDescriptor(self):
        selectedTrackDescriptor = self.getSelectedTrackDescriptor()
        if (
            selectedTrackDescriptor is None
            or self._currentPattern is None
            or self._targetMediaDescriptor is None
        ):
            return selectedTrackDescriptor

        for sourceTrackDescriptor in self._sourceMediaDescriptor.getTrackDescriptors():
            if (
                sourceTrackDescriptor.getSourceIndex()
                == selectedTrackDescriptor.getSourceIndex()
                and sourceTrackDescriptor.getType() == selectedTrackDescriptor.getType()
            ):
                return sourceTrackDescriptor

        return None

    def _build_shows_table(self):
        from textual.widgets import DataTable

@@ -287,6 +316,10 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
        self._update_show_header_labels()

    def on_mount(self):

        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        self._update_grid_layout()

        if self._currentPattern is None:
@@ -476,8 +509,6 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
            self.updateDifferences()
            return updated

        self.reloadProperties(reset_draft=True)

        tagDifferences = self._mediaChangeSetObj.get(MediaDescriptorChangeSet.TAGS_KEY, {})
        for addedTagKey in tagDifferences.get(DIFF_ADDED_KEY, {}).keys():
            self._tac.deleteMediaTagByKey(self._currentPattern.getId(), addedTagKey)
@@ -564,9 +595,6 @@ class InspectDetailsScreen(MediaWorkflowScreenBase):
        )

    def handle_edit_pattern(self, screenResult):
        if not screenResult:
            return

        self.reloadProperties(reset_draft=True)
        if self._currentPattern is not None:
            self.query_one("#pattern_input", Input).value = self._currentPattern.getPattern()

@@ -5,6 +5,7 @@ import os
FFX_LOGGER_NAME = "FFX"
CONSOLE_HANDLER_NAME = "ffx-console"
FILE_HANDLER_NAME = "ffx-file"
MUTED_CONSOLE_LEVEL = logging.CRITICAL + 1


def get_ffx_logger(name: str = FFX_LOGGER_NAME) -> logging.Logger:
@@ -66,3 +67,31 @@ def configure_ffx_logger(
    )

    return logger


def set_ffx_console_logging_enabled(
    logger: logging.Logger | None,
    *,
    enabled: bool,
):
    if logger is None:
        return None

    console_handler = next(
        (handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME),
        None,
    )
    if console_handler is None:
        return None

    if enabled:
        saved_level = getattr(console_handler, "_ffx_saved_level", None)
        if saved_level is not None:
            console_handler.setLevel(saved_level)
            delattr(console_handler, "_ffx_saved_level")
        return console_handler

    if not hasattr(console_handler, "_ffx_saved_level"):
        console_handler._ffx_saved_level = console_handler.level
    console_handler.setLevel(MUTED_CONSOLE_LEVEL)
    return console_handler
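The mute/restore trick added here works by parking the handler's level above `CRITICAL` and stashing the old level on the handler object itself. A standalone sketch of the same pattern using only the stdlib (names like `set_console_enabled` and `_saved_level` are illustrative, not the project's API):

```python
import logging

MUTED_LEVEL = logging.CRITICAL + 1  # above CRITICAL: effectively silences the handler

def set_console_enabled(logger: logging.Logger, handler_name: str, enabled: bool):
    # Find the named console handler on this logger, if any.
    handler = next((h for h in logger.handlers if h.get_name() == handler_name), None)
    if handler is None:
        return None
    if enabled:
        saved = getattr(handler, "_saved_level", None)
        if saved is not None:
            handler.setLevel(saved)  # restore the level captured when muting
            delattr(handler, "_saved_level")
        return handler
    if not hasattr(handler, "_saved_level"):
        handler._saved_level = handler.level  # remember level so it can be restored
    handler.setLevel(MUTED_LEVEL)
    return handler
```

Muting at the handler rather than the logger keeps other handlers (e.g. a file handler) receiving records unchanged.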
@@ -2,6 +2,7 @@ import os, re, click

from typing import List, Self

from ffx.attachment_format import AttachmentFormat
from ffx.track_type import TrackType
from ffx.iso_language import IsoLanguage

@@ -421,11 +422,11 @@ class MediaDescriptor:

        if sourceMediaDescriptor:
            fontDescriptors = [ftd for ftd in sourceMediaDescriptor.getAttachmentTracks()
                               if ftd.getCodec() == TrackCodec.TTF]
                               if ftd.getAttachmentFormat() == AttachmentFormat.TTF]
        else:
            fontDescriptors = [ftd for ftd in self.__trackDescriptors
                               if ftd.getType() == TrackType.ATTACHMENT
                               and ftd.getCodec() == TrackCodec.TTF]
                               and ftd.getAttachmentFormat() == AttachmentFormat.TTF]

        for ad in sorted(fontDescriptors, key=lambda d: d.getIndex()):
            inputMappingTokens += ["-map", f"0:{ad.getIndex()}"]
@@ -203,7 +203,7 @@ class MediaDescriptorChangeSet():
        if (
            self.__applyMetadataNormalization
            and trackDescriptor is not None
            and trackDescriptor.getType() == TrackType.SUBTITLE
            and trackDescriptor.getType() in (TrackType.VIDEO, TrackType.AUDIO, TrackType.SUBTITLE)
        ):
            trackTitle = str(normalizedTrackTags.get("title", "")).strip()
            fallbackTitle = str((fallbackTrackTags or {}).get("title", "")).strip()
@@ -260,6 +260,8 @@ class MediaDescriptorChangeSet():
            # else:
            #     dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '0']
        for ttd in self.__targetTrackDescriptors:
            if ttd.getType() == TrackType.ATTACHMENT:
                continue

            targetDispositions = ttd.getDispositionSet()
            streamIndicator = ttd.getType().indicator()
@@ -344,7 +346,7 @@ class MediaDescriptorChangeSet():
            for tagKey, tagValue in self.normalizeTrackTags(
                outputTrackTags,
                trackDescriptor=trackDescriptor,
                fallbackTrackTags=unchangedTrackTags | removedTrackTags,
                fallbackTrackTags=trackDescriptor.getTags(),
            ).items():
                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                   + f":{trackDescriptor.getSubIndex()}",
@@ -366,6 +368,7 @@ class MediaDescriptorChangeSet():
            for tagKey, tagValue in self.normalizeTrackTags(
                preservedTrackTags,
                trackDescriptor=trackDescriptor,
                fallbackTrackTags=trackDescriptor.getTags(),
            ).items():
                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                   + f":{trackDescriptor.getSubIndex()}",
@@ -1,6 +1,9 @@
import os
from time import monotonic

from textual import events, work
from textual.containers import Grid
from textual.worker import Worker, WorkerState
from textual.widgets import Button, Footer, Header, Static

from ffx.metadata_editor import apply_metadata_edits
@@ -9,11 +12,13 @@ from ffx.track_descriptor import TrackDescriptor
from .i18n import t
from .confirm_screen import ConfirmScreen
from .media_workflow_screen_base import MediaWorkflowScreenBase
from .screen_support import localized_column_width
from .screen_support import build_screen_log_pane, localized_column_width
from .tag_delete_screen import TagDeleteScreen
from .tag_details_screen import TagDetailsScreen
from .track_details_screen import TrackDetailsScreen

from .helper import LogLevel


class MediaEditScreen(MediaWorkflowScreenBase):

@@ -169,14 +174,27 @@ class MediaEditScreen(MediaWorkflowScreenBase):
            yield Button(t("Quit"), id="quit_button")
            yield Static(" ")

        yield build_screen_log_pane()
        yield Footer()

    def on_mount(self):

        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        self._update_grid_layout()
        self.updateMediaTags()
        self.updateTracks()
        self.updateDifferences()
        self.updateToggleButtons()
        self._applyChangesWorker = None

    def on_screen_resume(self, _event: events.ScreenResume) -> None:
        if not hasattr(self, "tracksTable"):
            return

        self.refreshAfterDraftChange()
        self.updateToggleButtons()

    def _update_grid_layout(self) -> None:
        leftColumnWidth = max(
@@ -195,6 +213,41 @@ class MediaEditScreen(MediaWorkflowScreenBase):
        if self._messageText:
            self.notify(self._messageText)

    def workerLoggingHandler(self,
                             message: str,
                             level: LogLevel = LogLevel.INFO) -> None:

        if level == LogLevel.DEBUG:
            self.context["logger"].debug(str(message))
        elif level == LogLevel.INFO:
            self.context["logger"].info(str(message))
        elif level == LogLevel.WARNING:
            self.context["logger"].warning(str(message))
        elif level == LogLevel.ERROR:
            self.context["logger"].error(str(message))
        elif level == LogLevel.CRITICAL:
            self.context["logger"].critical(str(message))
        else:
            raise Exception(f"Undefined Logging Level (msg={message})")

    def _report_apply_timings(self, applyResult: dict, reloadSeconds: float = 0.0) -> None:
        timings = dict(applyResult.get("timings", {}))
        ffmpegSeconds = float(timings.get("ffmpeg_seconds", 0.0))
        replaceSeconds = float(timings.get("replace_seconds", 0.0))
        writeSeconds = float(timings.get("write_seconds", ffmpegSeconds + replaceSeconds))
        reloadSeconds = float(reloadSeconds)
        totalSeconds = writeSeconds + reloadSeconds

        timingSummary = (
            f"ffx edit timings: ffmpeg={ffmpegSeconds:.2f}s "
            + f"replace={replaceSeconds:.2f}s "
            + f"reload={reloadSeconds:.2f}s "
            + f"total={totalSeconds:.2f}s"
        )
        self.context["logger"].info(timingSummary)

    def updateToggleButtons(self):
        self._set_toggle_button_state(
            "#cleanup_toggle_button",
@@ -296,6 +349,7 @@ class MediaEditScreen(MediaWorkflowScreenBase):
    def action_toggle_normalization(self):
        self.setApplyNormalization(not self._applyNormalization)
        self.updateToggleButtons()
        self.updateTracks()
        self.updateDifferences()
        self.setMessage(
            t("Normalization enabled.")
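The timing report in this hunk has a small fallback worth noting: when no explicit `write_seconds` was recorded, it derives one from the ffmpeg and replace times. A standalone sketch of that computation (the function name `format_timings` is illustrative, not part of the project):

```python
def format_timings(timings: dict, reload_seconds: float = 0.0) -> str:
    # Fall back to ffmpeg + replace when no explicit write time was recorded,
    # mirroring the screen's _report_apply_timings summary line.
    ffmpeg_s = float(timings.get("ffmpeg_seconds", 0.0))
    replace_s = float(timings.get("replace_seconds", 0.0))
    write_s = float(timings.get("write_seconds", ffmpeg_s + replace_s))
    total_s = write_s + float(reload_seconds)
    return (f"ffx edit timings: ffmpeg={ffmpeg_s:.2f}s "
            f"replace={replace_s:.2f}s "
            f"reload={float(reload_seconds):.2f}s "
            f"total={total_s:.2f}s")
```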
@@ -323,30 +377,35 @@ class MediaEditScreen(MediaWorkflowScreenBase):
        if trackDescriptor is None:
            return

        updatedTracks = []
        nextSourceMediaDescriptor = self._sourceMediaDescriptor.clone(context=self.context)
        updatedTracks = nextSourceMediaDescriptor.getTrackDescriptors()
        replacementTrack = trackDescriptor.clone(context=self.context)
        replaced = False
        for currentTrack in self._sourceMediaDescriptor.getTrackDescriptors():
            if (
                currentTrack.getIndex() == trackDescriptor.getIndex()
                and currentTrack.getSubIndex() == trackDescriptor.getSubIndex()
            ):
                updatedTracks.append(trackDescriptor)

        for trackIndex, currentTrack in enumerate(updatedTracks):
            sameSourceTrack = (
                currentTrack.getSourceIndex() == replacementTrack.getSourceIndex()
                and currentTrack.getType() == replacementTrack.getType()
            )
            sameVisibleTrack = (
                currentTrack.getIndex() == replacementTrack.getIndex()
                and currentTrack.getSubIndex() == replacementTrack.getSubIndex()
            )
            if sameSourceTrack or sameVisibleTrack:
                updatedTracks[trackIndex] = replacementTrack
                replaced = True
            else:
                updatedTracks.append(currentTrack)
                break

        if not replaced:
            self.setMessage(t("Unable to update selected stream."))
            return

        self._sourceMediaDescriptor = self._sourceMediaDescriptor.clone(context=self.context)
        self._sourceMediaDescriptor.getTrackDescriptors().clear()
        self._sourceMediaDescriptor.getTrackDescriptors().extend(updatedTracks)
        self._sourceMediaDescriptor = nextSourceMediaDescriptor
        self.setMessage(
            t(
                "Updated stream #{index} ({track_type}).",
                index=trackDescriptor.getIndex(),
                track_type=t(trackDescriptor.getType().label()),
                index=replacementTrack.getIndex(),
                track_type=t(replacementTrack.getType().label()),
            )
        )
        self.refreshAfterDraftChange()
@@ -356,33 +415,77 @@ class MediaEditScreen(MediaWorkflowScreenBase):
            self.setMessage(t("No changes to apply."))
            return

        try:
            applyResult = apply_metadata_edits(
                self.context,
                self._mediaFilename,
                self._baselineMediaDescriptor,
                self._sourceMediaDescriptor,
            )
        except Exception as ex:
            self.context["logger"].exception(
                "Failed to apply metadata edits for %s",
                self._mediaFilename,
            )
            self.setMessage(t("Apply failed: {error}", error=ex))
        if self._applyChangesWorker is not None and self._applyChangesWorker.is_running:
            self.setMessage(t("Apply already running."))
            return

        self.context["logger"].info(
            t("Starting metadata apply for {filename}.", filename=self._mediaFilename)
        )
        self._applyChangesWorker = self.run_apply_changes_worker()

    @work(
        thread=True,
        exclusive=True,
        group="media-edit-apply",
        exit_on_error=False,
    )
    def run_apply_changes_worker(self):
        return apply_metadata_edits(
            self.context,
            self._mediaFilename,
            self._baselineMediaDescriptor,
            self._sourceMediaDescriptor,
            loggingHandler = self.workerLoggingHandler,
        )

    def on_worker_state_changed(self, event: Worker.StateChanged) -> None:
        if event.worker is not self._applyChangesWorker:
            return

        if event.state == WorkerState.ERROR:
            error = event.worker.error
            if error is not None:
                self.context["logger"].error(
                    "Failed to apply metadata edits for %s",
                    self._mediaFilename,
                    exc_info=(type(error), error, error.__traceback__),
                )
                self.setMessage(t("Apply failed: {error}", error=error))
            self._applyChangesWorker = None
            return

        if event.state != WorkerState.SUCCESS:
            return

        applyResult = event.worker.result or {}

        if applyResult.get("dry_run", False):
            self._report_apply_timings(applyResult, reloadSeconds=0.0)
            self.context["logger"].info(
                t(
                    "Dry-run prepared temporary output {target_path}.",
                    target_path=applyResult["target_path"],
                ),
            )
            self.setMessage(
                t(
                    "Dry-run: would rewrite via temporary file {target_path}",
                    target_path=applyResult["target_path"],
                )
            )
            self._applyChangesWorker = None
            return

        reloadStart = monotonic()
        self.context["logger"].info(t("Reloading file after metadata write."))
        self.reloadProperties(reset_draft=True)
        self.refreshAfterDraftChange()
        reloadSeconds = monotonic() - reloadStart
        self._report_apply_timings(applyResult, reloadSeconds=reloadSeconds)
        self.context["logger"].info(t("Changes applied and file reloaded."))
        self.setMessage(t("Changes applied and file reloaded."))
        self._applyChangesWorker = None

    def action_revert_changes(self):
        if not self.hasPendingChanges():
@@ -9,6 +9,8 @@ from textual.widgets._data_table import CellDoesNotExist
from ffx.audio_layout import AudioLayout
from ffx.file_properties import FileProperties
from ffx.helper import DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY
from ffx.iso_language import IsoLanguage
from ffx.media_descriptor import MediaDescriptor
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.track_descriptor import TrackDescriptor
from ffx.track_disposition import TrackDisposition
@@ -123,6 +125,24 @@ class MediaWorkflowScreenBase(Screen):
        add_auto_table_column(self.differencesTable, t(self.DIFFERENCES_COLUMN_LABEL))
        self.differencesTable.cursor_type = "row"

    def _track_codec_cell_value(self, trackDescriptor: TrackDescriptor) -> str:
        if trackDescriptor.getType() == TrackType.ATTACHMENT:
            return " "
        return trackDescriptor.getFormatDescriptor().label()

    def _track_disposition_cell_value(
        self,
        trackDescriptor: TrackDescriptor,
        disposition: TrackDisposition,
    ) -> str:
        if trackDescriptor.getType() == TrackType.ATTACHMENT:
            return " "
        return (
            t("Yes")
            if disposition in trackDescriptor.getDispositionSet()
            else t("No")
        )

    def reloadProperties(self, reset_draft: bool = True):
        self._mediaFileProperties = FileProperties(self.context, self._mediaFilename)
        probedMediaDescriptor = self._mediaFileProperties.getMediaDescriptor()
@@ -170,10 +190,17 @@ class MediaWorkflowScreenBase(Screen):
    def hasPendingChanges(self) -> bool:
        return bool(self._mediaChangeSetObj)

    def getDisplayedMediaDescriptor(self) -> MediaDescriptor | None:
        return self._sourceMediaDescriptor

    def getTrackEditSourceDescriptor(self) -> TrackDescriptor | None:
        return self.getSelectedTrackDescriptor()

    def updateMediaTags(self):
        displayedMediaDescriptor = self.getDisplayedMediaDescriptor()
        self._sourceMediaTagRowData = populate_tag_table(
            self.mediaTagsTable,
            self._sourceMediaDescriptor.getTags(),
            displayedMediaDescriptor.getTags() if displayedMediaDescriptor is not None else {},
            ignore_keys=self._ignoreGlobalKeys,
            remove_keys=self._removeGlobalKeys,
        )
@@ -183,8 +210,14 @@ class MediaWorkflowScreenBase(Screen):
        self._configure_tracks_table_columns()
        self._trackRowData = {}

        trackDescriptorList = self._sourceMediaDescriptor.getTrackDescriptors()
        displayedMediaDescriptor = self.getDisplayedMediaDescriptor()
        trackDescriptorList = (
            displayedMediaDescriptor.getTrackDescriptors()
            if displayedMediaDescriptor is not None
            else []
        )
        typeCounter = {}
        applyNormalization = bool(getattr(self, "_applyNormalization", False))

        for trackDescriptor in trackDescriptorList:
            trackType = trackDescriptor.getType()
@@ -193,19 +226,34 @@ class MediaWorkflowScreenBase(Screen):

            dispositionSet = trackDescriptor.getDispositionSet()
            audioLayout = trackDescriptor.getAudioLayout()
            trackTitle = trackDescriptor.getTitle()
            if (
                applyNormalization
                and not str(trackTitle).strip()
                and trackType in (TrackType.VIDEO, TrackType.AUDIO, TrackType.SUBTITLE)
            ):
                trackLanguage = trackDescriptor.getLanguage()
                if trackLanguage != IsoLanguage.UNDEFINED:
                    trackTitle = trackLanguage.label()
            row = (
                trackDescriptor.getIndex(),
                t(trackType.label()),
                typeCounter[trackType],
                trackDescriptor.getCodec().label(),
                self._track_codec_cell_value(trackDescriptor),
                t(audioLayout.label())
                if trackType == TrackType.AUDIO
                and audioLayout != AudioLayout.LAYOUT_UNDEFINED
                else " ",
                trackDescriptor.getLanguage().label(),
                trackDescriptor.getTitle(),
                t("Yes") if TrackDisposition.DEFAULT in dispositionSet else t("No"),
                t("Yes") if TrackDisposition.FORCED in dispositionSet else t("No"),
                trackTitle,
                self._track_disposition_cell_value(
                    trackDescriptor,
                    TrackDisposition.DEFAULT,
                ),
                self._track_disposition_cell_value(
                    trackDescriptor,
                    TrackDisposition.FORCED,
                ),
            )

            row_key = self.tracksTable.add_row(*map(str, row))
@@ -355,7 +403,7 @@ class MediaWorkflowScreenBase(Screen):
        return None

    def setSelectedTrackDefault(self):
        selectedTrackDescriptor = self.getSelectedTrackDescriptor()
        selectedTrackDescriptor = self.getTrackEditSourceDescriptor()
        if selectedTrackDescriptor is None:
            return False

@@ -366,7 +414,7 @@ class MediaWorkflowScreenBase(Screen):
        return True

    def setSelectedTrackForced(self):
        selectedTrackDescriptor = self.getSelectedTrackDescriptor()
        selectedTrackDescriptor = self.getTrackEditSourceDescriptor()
        if selectedTrackDescriptor is None:
            return False
@@ -1,17 +1,23 @@
from __future__ import annotations

import click
import os
import tempfile
from time import monotonic

from .constants import (
    DEFAULT_AC3_BANDWIDTH,
    DEFAULT_DTS_BANDWIDTH,
    DEFAULT_STEREO_BANDWIDTH,
    FFMPEG_COMMAND_TOKENS,
)
from .ffx_controller import FfxController
from .media_descriptor import MediaDescriptor
from .media_descriptor_change_set import MediaDescriptorChangeSet
from .process import executeProcess, formatCommandSequence
from .video_encoder import VideoEncoder

from .helper import LogLevel


def create_temporary_output_path(source_path: str) -> str:
    sourceDirectory = os.path.dirname(os.path.abspath(source_path)) or "."
@@ -49,41 +55,123 @@ def build_metadata_edit_context(context: dict) -> dict:
    return editContext


def build_metadata_edit_command(
    context: dict,
    source_path: str,
    target_path: str,
    baseline_descriptor: MediaDescriptor,
    draft_descriptor: MediaDescriptor,
) -> list[str]:
    changeSet = MediaDescriptorChangeSet(context, draft_descriptor, baseline_descriptor)

    return (
        list(FFMPEG_COMMAND_TOKENS)
        + ["-i", source_path, "-map", "0", "-c", "copy"]
        + changeSet.generateMetadataTokens()
        + changeSet.generateDispositionTokens()
        + [target_path]
    )


def notify_ffmpeg_invocation(
    context: dict,
    command_sequence: list[str],
    *,
    loggingHandler = None,
    dry_run: bool = False,
) -> None:
    loggingCallback = loggingHandler or context.get("logging_handler")
    if not callable(loggingCallback):
        return

    verbosity = int(context.get("verbosity", 0) or 0)
    if verbosity > 0:
        if dry_run:
            loggingCallback(f"ffmpeg dry-run: {formatCommandSequence(command_sequence)}", level = LogLevel.DEBUG)
        else:
            loggingCallback(f"ffmpeg: {formatCommandSequence(command_sequence)}", level = LogLevel.DEBUG)
        return

    loggingCallback("ffmpeg dry-run prepared.") if dry_run else loggingCallback(
        "ffmpeg metadata write started."
    )
def apply_metadata_edits(
    context: dict,
    source_path: str,
    baseline_descriptor: MediaDescriptor,
    draft_descriptor: MediaDescriptor,
    *,
    loggingHandler = None,
) -> dict[str, object]:

    temporaryOutputPath = create_temporary_output_path(source_path)

    editContext = build_metadata_edit_context(context)
    controller = FfxController(editContext, draft_descriptor, baseline_descriptor)

    commandSequence = build_metadata_edit_command(
        editContext,
        source_path,
        temporaryOutputPath,
        baseline_descriptor,
        draft_descriptor,
    )

    ffmpegSeconds = 0.0
    replaceSeconds = 0.0

    try:
        controller.runJob(
            source_path,
            temporaryOutputPath,
            targetFormat="",
            chainIteration=[],
        )

        if editContext.get("dry_run", False):

            notify_ffmpeg_invocation(
                editContext,
                commandSequence,
                loggingHandler = loggingHandler,
                dry_run=True,
            )

            return {
                "applied": False,
                "dry_run": True,
                "target_path": temporaryOutputPath,
                "command_sequence": commandSequence,
                "timings": {
                    "ffmpeg_seconds": ffmpegSeconds,
                    "replace_seconds": replaceSeconds,
                    "write_seconds": ffmpegSeconds + replaceSeconds,
                },
            }

        notify_ffmpeg_invocation(editContext,
                                 commandSequence,
                                 loggingHandler = loggingHandler)

        ffmpegStart = monotonic()
        _out, err, rc = executeProcess(commandSequence, context=editContext)
        ffmpegSeconds = monotonic() - ffmpegStart

        if rc:
            raise click.ClickException(f"ffmpeg edit failed: rc={rc} error={err}")

        replaceStart = monotonic()
        os.replace(temporaryOutputPath, source_path)
        replaceSeconds = monotonic() - replaceStart

        return {
            "applied": True,
            "dry_run": False,
            "target_path": source_path,
            "command_sequence": commandSequence,
            "timings": {
                "ffmpeg_seconds": ffmpegSeconds,
                "replace_seconds": replaceSeconds,
                "write_seconds": ffmpegSeconds + replaceSeconds,
            },
        }

    except Exception:
        if os.path.exists(temporaryOutputPath):
            os.remove(temporaryOutputPath)
        raise
    finally:
        if editContext.get("dry_run", False) and os.path.exists(temporaryOutputPath):
            os.remove(temporaryOutputPath)
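The edit path above follows the classic write-to-temp-then-swap pattern: ffmpeg writes a sibling temporary file, `os.replace()` atomically moves it over the original, and the temporary is removed on any failure. A minimal stdlib-only sketch of the same idea (function name and signature are illustrative, not the project's API):

```python
import os
import tempfile

def rewrite_file_atomically(path: str, new_content: bytes) -> None:
    # Write to a temporary file in the same directory, then atomically
    # swap it over the original with os.replace(); clean up on failure.
    directory = os.path.dirname(os.path.abspath(path)) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as handle:
            handle.write(new_content)
        os.replace(tmp_path, path)  # atomic when source and target share a filesystem
    except Exception:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise
```

Keeping the temporary in the same directory matters: `os.replace` is only atomic within one filesystem, which is also why the diff derives the temp path from the source path.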
@@ -4,6 +4,7 @@ from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base

from ffx.attachment_format import AttachmentFormat
from ffx.track_type import TrackType

from ffx.iso_language import IsoLanguage
@@ -132,9 +133,16 @@ class Track(Base):

        if trackType in [t.label() for t in TrackType]:

            if trackType == TrackType.ATTACHMENT.label():
                storedFormatIdentifier = AttachmentFormat.identifyFfprobeStream(streamObj).identifier()
            else:
                storedFormatIdentifier = TrackCodec.identify(
                    streamObj.get(TrackDescriptor.FFPROBE_CODEC_KEY)
                ).identifier()

            return cls(pattern_id = patternId,
                       track_type = trackType,
                       codec_name = streamObj[TrackDescriptor.FFPROBE_CODEC_NAME_KEY],
                       codec_name = storedFormatIdentifier,
                       disposition_flags = sum([2**t.index() for (k,v) in streamObj[TrackDescriptor.FFPROBE_DISPOSITION_KEY].items()
                                                if v and (t := TrackDisposition.find(k)) is not None]),
                       audio_layout = AudioLayout.identify(streamObj))
@@ -153,8 +161,20 @@ class Track(Base):
        return TrackType.fromIndex(self.track_type)

    def getCodec(self) -> TrackCodec:
        if self.getType() == TrackType.ATTACHMENT:
            return TrackCodec.UNKNOWN
        return TrackCodec.identify(self.codec_name)

    def getAttachmentFormat(self) -> AttachmentFormat:
        if self.getType() != TrackType.ATTACHMENT:
            return AttachmentFormat.UNKNOWN
        return AttachmentFormat.identify(self.codec_name)

    def getFormatDescriptor(self):
        if self.getType() == TrackType.ATTACHMENT:
            return self.getAttachmentFormat()
        return self.getCodec()

    def getIndex(self):
        return int(self.index) if self.index is not None else -1

@@ -206,7 +226,10 @@ class Track(Base):
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = subIndex

        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = self.getType()
        kwargs[TrackDescriptor.CODEC_KEY] = self.getCodec()
        if self.getType() == TrackType.ATTACHMENT:
            kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = self.getAttachmentFormat()
        else:
            kwargs[TrackDescriptor.CODEC_KEY] = self.getCodec()

        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = self.getDispositionSet()
        kwargs[TrackDescriptor.TAGS_KEY] = self.getTags()
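The `disposition_flags` column above packs the active ffprobe disposition names into a single integer, one bit per known disposition (`sum(2**t.index() ...)`). A self-contained sketch of the pack/unpack round trip; the `Disposition` enum members here are a hypothetical subset, not the project's actual `TrackDisposition` list:

```python
from enum import Enum

class Disposition(Enum):
    # Hypothetical subset of ffprobe disposition names; the value picks the bit.
    DEFAULT = 0
    FORCED = 1
    COMMENT = 2

def pack_dispositions(ffprobe_disposition: dict) -> int:
    # Fold active disposition names into one integer bitmask,
    # mirroring how the Track model stores disposition_flags.
    flags = 0
    for name, active in ffprobe_disposition.items():
        member = Disposition.__members__.get(name.upper())
        if active and member is not None:
            flags |= 1 << member.value  # unknown names are simply skipped
    return flags

def unpack_dispositions(flags: int) -> set:
    # Recover the set of dispositions encoded in the bitmask.
    return {d for d in Disposition if flags & (1 << d.value)}
```

Storing a bitmask keeps the schema to one integer column while still allowing lossless recovery of the disposition set.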
@@ -134,7 +134,7 @@ class PatternController:
    def _build_track_row(self, trackDescriptor: TrackDescriptor) -> Track:
        track = Track(
            track_type=int(trackDescriptor.getType().index()),
            codec_name=str(trackDescriptor.getCodec().identifier()),
            codec_name=str(trackDescriptor.getFormatDescriptor().identifier()),
            index=int(trackDescriptor.getIndex()),
            source_index=int(trackDescriptor.getSourceIndex()),
            disposition_flags=int(
@@ -7,7 +7,7 @@ from textual.containers import Grid
from .i18n import t
from .show_controller import ShowController
from .pattern_controller import PatternController
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit

from ffx.model.pattern import Pattern

@@ -68,6 +68,10 @@ class PatternDeleteScreen(Screen):

    def on_mount(self):

        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        if self.__showDescriptor:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")
        if not self.__pattern is None:
@@ -103,6 +107,7 @@ class PatternDeleteScreen(Screen):
        yield Button(t("Delete"), id="delete_button")
        yield Button(t("Cancel"), id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()
@@ -1,6 +1,7 @@
import click, re
from typing import List

from textual import events
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input, DataTable, TextArea
from textual.containers import Grid
@@ -18,6 +19,7 @@ from .screen_support import (
    add_auto_table_column,
    build_screen_bootstrap,
    build_screen_controllers,
    build_screen_log_pane,
    go_back_or_exit,
    populate_tag_table,
)
@@ -173,7 +175,7 @@ class PatternDetailsScreen(Screen):
            row = (td.getIndex(),
                   t(trackType.label()),
                   typeCounter[trackType],
                   td.getCodec().label(),
                   td.getFormatDescriptor().label(),
                   t(audioLayout.label()) if trackType == TrackType.AUDIO
                   and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
                   trackLanguage.label() if trackLanguage != IsoLanguage.UNDEFINED else ' ',
@@ -324,6 +326,9 @@ class PatternDetailsScreen(Screen):

    def on_mount(self):

        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        if not self.__showDescriptor is None:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")

@@ -341,6 +346,16 @@ class PatternDetailsScreen(Screen):
        self.updateTracks()
        self.updateShiftedSeasons()

    def on_screen_resume(self, _event: events.ScreenResume) -> None:
        if not hasattr(self, "tracksTable") or not hasattr(self, "tagsTable"):
            return

        self.updateTags()
        self.updateTracks()

        if self.__pattern is not None and hasattr(self, "shiftedSeasonsTable"):
            self.updateShiftedSeasons()

    def compose(self):

@@ -429,9 +444,9 @@ class PatternDetailsScreen(Screen):
        yield Static(" ")
        yield Static(" ")

        yield Static(" ")
        yield Static(" ")
        yield Static(" ")
        yield Static(" ")
        yield Static(" ")
        yield Static(" ")

        # Row 10
        yield self.shiftedSeasonsTable
@@ -482,6 +497,7 @@ class PatternDetailsScreen(Screen):
        # Row 20
        yield Static(" ", classes="seven")

        yield build_screen_log_pane()
        yield Footer()
@@ -1,13 +1,18 @@
from __future__ import annotations

import logging
import weakref
from collections.abc import Mapping
from dataclasses import dataclass

from rich.cells import cell_len
from rich.measure import measure_renderables
from rich.text import Text
from textual import events
from textual.widgets import Collapsible, RichLog, Static

from .helper import formatRichColor
from .i18n import t
from .pattern_controller import PatternController
from .show_controller import ShowController
from .shifted_season_controller import ShiftedSeasonController
@@ -16,6 +21,156 @@ from .tmdb_controller import TmdbController
from .track_controller import TrackController


SCREEN_LOG_PANE_ID = "screen_log_pane"
SCREEN_LOG_VIEW_ID = "screen_log_view"
SCREEN_LOG_RESIZE_HANDLE_ID = "screen_log_resize_handle"
SCREEN_LOG_HANDLER_NAME = "ffx-screen-log"
SCREEN_LOG_DEFAULT_HEIGHT = 8
SCREEN_LOG_MIN_HEIGHT = 4
SCREEN_LOG_COMPONENT_WIDTH = 16
SCREEN_LOG_LEVEL_WIDTH = 8

_SCREEN_LOG_PANE_ENABLED = False


class ScreenLogHandler(logging.Handler):
    """Mirror logger output into the active screen log pane when available."""

    def __init__(self, app) -> None:
        super().__init__(level=logging.DEBUG)
        self.set_name(SCREEN_LOG_HANDLER_NAME)
        self.set_app(app)

    def set_app(self, app) -> None:
        self._app_ref = weakref.ref(app) if app is not None else lambda: None

    def emit(self, record: logging.LogRecord) -> None:
        app = self._app_ref()
        if app is None:
            return

        try:
            message = str(self.format(record)).strip()
        except Exception:
            self.handleError(record)
            return

        if not message:
            return

        try:
            app.call_from_thread(write_screen_log, app.screen, message)
        except RuntimeError:
            write_screen_log(app.screen, message)
        except Exception:
            self.handleError(record)


class ScreenLogResizeHandle(Static):
    DEFAULT_CSS = """
    ScreenLogResizeHandle {
        width: 100%;
        height: 1;
        content-align: center middle;
        color: $text-muted;
        background: $panel-lighten-1;
    }

    ScreenLogResizeHandle:hover {
        color: $text;
        background: $panel-lighten-2;
    }
    """

    def __init__(self) -> None:
        super().__init__(" drag to resize ", id=SCREEN_LOG_RESIZE_HANDLE_ID)
        self._drag_active = False
        self._drag_origin_screen_y = 0
        self._drag_origin_height = SCREEN_LOG_DEFAULT_HEIGHT

    def _get_log_pane(self):
        return self.parent.parent if self.parent is not None else None

    def on_mouse_down(self, event: events.MouseDown) -> None:
        if event.button != 1:
            return

        log_pane = self._get_log_pane()
        if log_pane is None:
            return

        self._drag_active = True
        self._drag_origin_screen_y = event.screen_y
        self._drag_origin_height = log_pane.get_log_height()
        self.capture_mouse()
        event.stop()

    def on_mouse_move(self, event: events.MouseMove) -> None:
        if not self._drag_active:
            return

        log_pane = self._get_log_pane()
        if log_pane is None:
            return

        next_height = self._drag_origin_height + (
            self._drag_origin_screen_y - event.screen_y
        )
        log_pane.set_log_height(next_height)
        event.stop()

    def on_mouse_up(self, event: events.MouseUp) -> None:
        if not self._drag_active:
            return

        self._drag_active = False
        self.release_mouse()
        event.stop()


class ResizableScreenLogPane(Collapsible):
    def __init__(self) -> None:
        self._log_view = RichLog(
            id=SCREEN_LOG_VIEW_ID,
            wrap=True,
            markup=False,
            highlight=False,
            auto_scroll=True,
        )
        self._log_height = SCREEN_LOG_DEFAULT_HEIGHT
        self._apply_log_height()

        super().__init__(
            ScreenLogResizeHandle(),
            self._log_view,
            title=t("Log"),
            collapsed=True,
            id=SCREEN_LOG_PANE_ID,
        )
        self.styles.width = "100%"

    def _apply_log_height(self) -> None:
        self._log_view.styles.height = self._log_height
        self._log_view.styles.width = "100%"

    def get_log_height(self) -> int:
        return int(self._log_height)

    def set_log_height(self, height: int) -> None:
        next_height = max(SCREEN_LOG_MIN_HEIGHT, int(height))

        try:
            available_height = int(self.app.size.height) - 8
        except Exception:
            available_height = next_height

        if available_height > 0:
            next_height = min(next_height, available_height)

        self._log_height = next_height
        self._apply_log_height()


@dataclass(frozen=True)
class ScreenBootstrap:
    context: dict
@@ -46,6 +201,48 @@ def build_screen_bootstrap(context: dict) -> ScreenBootstrap:
    )


def set_screen_log_pane_enabled(enabled: bool) -> None:
    global _SCREEN_LOG_PANE_ENABLED
    _SCREEN_LOG_PANE_ENABLED = bool(enabled)


def is_screen_log_pane_enabled() -> bool:
    return bool(_SCREEN_LOG_PANE_ENABLED)


def configure_screen_log_handler(logger, app, *, enabled: bool):
    if logger is None:
        return None

    screen_log_handler = next(
        (handler for handler in logger.handlers if handler.get_name() == SCREEN_LOG_HANDLER_NAME),
        None,
    )

    if not enabled:
        if screen_log_handler is not None:
            logger.removeHandler(screen_log_handler)
            screen_log_handler.close()
        return None

    if screen_log_handler is None:
        screen_log_handler = ScreenLogHandler(app)
        logger.addHandler(screen_log_handler)
    elif isinstance(screen_log_handler, ScreenLogHandler):
        screen_log_handler.set_app(app)

    screen_log_handler.setLevel(logging.DEBUG)
    screen_log_handler.setFormatter(
        logging.Formatter(
            f"%(name)-{SCREEN_LOG_COMPONENT_WIDTH}s "
            + f"%(levelname)-{SCREEN_LOG_LEVEL_WIDTH}s "
            + "%(asctime)s | %(message)s",
            datefmt="%Y-%m-%d %H:%M:%S",
        )
    )
    return screen_log_handler


def build_screen_controllers(
    context: dict,
    *,
@@ -143,6 +340,48 @@ def update_table_column_label(table, column_key, label) -> None:
    table.refresh()


def build_screen_log_pane() -> ResizableScreenLogPane | Static:
    """Create a shared collapsible log pane for screen-local diagnostics."""

    if not is_screen_log_pane_enabled():
        hidden = Static("", id=f"{SCREEN_LOG_PANE_ID}_disabled")
        hidden.display = False
        return hidden

    return ResizableScreenLogPane()


def toggle_screen_log_pane(screen) -> bool:
    """Toggle the current screen log pane when present."""

    try:
        logPane = screen.query_one(f"#{SCREEN_LOG_PANE_ID}", Collapsible)
    except Exception:
        return False

    logPane.collapsed = not bool(logPane.collapsed)
    return True


def write_screen_log(screen, message: str) -> bool:
    """Append a line to the current screen log pane when present."""

    if message is None:
        return False

    text = str(message).strip()
    if not text:
        return False

    try:
        logView = screen.query_one(f"#{SCREEN_LOG_VIEW_ID}", RichLog)
    except Exception:
        return False

    logView.write(text)
    return True


def go_back_or_exit(screen) -> None:
    """Pop the current screen when possible, otherwise exit the app."""

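The `ScreenLogHandler` above keeps only a weak reference to the app, so a dismissed app can be garbage-collected even while the handler stays registered on the logger. A minimal stdlib-only sketch of that pattern, assuming nothing beyond `logging` and `weakref` (the `DummySink` and `WeakRefHandler` names are hypothetical, for illustration only):

```python
import gc
import logging
import weakref


class DummySink:
    """Stand-in for the app/screen object that receives log lines."""

    def __init__(self) -> None:
        self.lines: list[str] = []

    def write(self, line: str) -> None:
        self.lines.append(line)


class WeakRefHandler(logging.Handler):
    """Forward records to a sink held only via weakref, like ScreenLogHandler."""

    def __init__(self, sink) -> None:
        super().__init__(level=logging.DEBUG)
        self._sink_ref = weakref.ref(sink) if sink is not None else lambda: None

    def emit(self, record: logging.LogRecord) -> None:
        sink = self._sink_ref()
        if sink is None:
            return  # sink already collected: drop the record silently
        sink.write(self.format(record))


logger = logging.getLogger("weakref-demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False
sink = DummySink()
handler = WeakRefHandler(sink)
logger.addHandler(handler)

logger.info("first")
captured = list(sink.lines)

del sink           # drop the only strong reference to the sink
gc.collect()
logger.info("second")  # handler is still attached, but now emits nowhere
```

The point of the pattern is the last two lines: logging after the sink is gone is a silent no-op instead of an error, which is why `ScreenLogHandler.emit` can bail out early when `self._app_ref()` returns `None`.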
@@ -3,7 +3,7 @@ from textual.screen import Screen
from textual.widgets import Footer, Placeholder

from .i18n import t
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit


class SettingsScreen(Screen):
@@ -17,7 +17,15 @@ class SettingsScreen(Screen):
    def compose(self) -> ComposeResult:
        # Row 1
        yield Placeholder(t("Settings Screen"))
        yield build_screen_log_pane()
        yield Footer()

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

    def action_back(self):
        go_back_or_exit(self)

@@ -6,7 +6,7 @@ from textual.containers import Grid

from .i18n import t
from .shifted_season_controller import ShiftedSeasonController
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit

from ffx.model.shifted_season import ShiftedSeason

@@ -67,6 +67,9 @@ class ShiftedSeasonDeleteScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

        ownerLabel = (
@@ -127,6 +130,7 @@ class ShiftedSeasonDeleteScreen(Screen):
        yield Button(t("Delete"), id="delete_button")
        yield Button(t("Cancel"), id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()

@@ -6,7 +6,7 @@ from textual.containers import Grid

from .i18n import t
from .shifted_season_controller import ShiftedSeasonController
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit

from ffx.model.shifted_season import ShiftedSeason

@@ -109,6 +109,9 @@ class ShiftedSeasonDetailsScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        if self.__shiftedSeasonId is not None:
            shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

@@ -175,6 +178,7 @@ class ShiftedSeasonDetailsScreen(Screen):
        # Row 10
        yield Static(" ", classes="three")

        yield build_screen_log_pane()
        yield Footer()

@@ -4,7 +4,7 @@ from textual.containers import Grid

from .i18n import t
from .show_controller import ShowController
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit

# Screen[dict[int, str, int]]
class ShowDeleteScreen(Screen):
@@ -89,6 +89,7 @@ class ShowDeleteScreen(Screen):
        yield Button(t("Cancel"), id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()

@@ -108,5 +109,12 @@ class ShowDeleteScreen(Screen):
        if event.button.id == "cancel_button":
            self.app.pop_screen()

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

    def action_back(self):
        go_back_or_exit(self)

@@ -21,6 +21,7 @@ from .screen_support import (
    add_auto_table_column,
    build_screen_bootstrap,
    build_screen_controllers,
    build_screen_log_pane,
    go_back_or_exit,
)

@@ -174,6 +175,9 @@ class ShowDetailsScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        if self.__showDescriptor is not None:
            showId = int(self.__showDescriptor.getId())
@@ -433,6 +437,7 @@ class ShowDetailsScreen(Screen):
        yield Button(t("Cancel"), id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()

@@ -5,7 +5,12 @@ from rich.text import Text

from .i18n import t
from .show_controller import ShowController
from .screen_support import add_auto_table_column, go_back_or_exit, update_table_column_label
from .screen_support import (
    add_auto_table_column,
    build_screen_log_pane,
    go_back_or_exit,
    update_table_column_label,
)

from .show_details_screen import ShowDetailsScreen
from .show_delete_screen import ShowDeleteScreen
@@ -239,6 +244,10 @@ class ShowsScreen(Screen):

    def on_mount(self) -> None:
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        for show in self.__sc.getAllShows():
            self._add_show_row(show.getDescriptor(self.context))

@@ -278,4 +287,5 @@ class ShowsScreen(Screen):
        f = Footer()
        f.description = "yolo"

        yield build_screen_log_pane()
        yield f

@@ -3,7 +3,7 @@ from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .i18n import t
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit


# Screen[dict[int, str, int]]
@@ -64,6 +64,9 @@ class TagDeleteScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        self.query_one("#keylabel", Static).update(str(self.__key))
        self.query_one("#valuelabel", Static).update(str(self.__value))

@@ -92,6 +95,7 @@ class TagDeleteScreen(Screen):
        yield Button(t("Delete"), id="delete_button")
        yield Button(t("Cancel"), id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()

@@ -3,7 +3,7 @@ from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid

from .i18n import t
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit


# Screen[dict[int, str, int]]
@@ -87,6 +87,9 @@ class TagDetailsScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        if self.__key is not None:
            self.query_one("#key_input", Input).value = str(self.__key)

@@ -121,6 +124,7 @@ class TagDetailsScreen(Screen):
        # Row 6
        yield Static(" ", classes="five", id="messagestatic")

        yield build_screen_log_pane()
        yield Footer(id="footer")

@@ -3,20 +3,22 @@ from enum import Enum

class TrackCodec(Enum):

    H265 = {'identifier': 'hevc', 'format': 'h265', 'extension': 'h265', 'label': 'H.265'}
    VP9 = {'identifier': 'vp9', 'format': 'ivf', 'extension': 'ivf', 'label': 'VP9'}
    H265 = {'identifier': 'hevc', 'format': None, 'extension': 'h265', 'label': 'H.265'}
    H264 = {'identifier': 'h264', 'format': 'h264', 'extension': 'h264', 'label': 'H.264'}
    MPEG4 = {'identifier': 'mpeg4', 'format': 'm4v', 'extension': 'm4v', 'label': 'MPEG-4'}
    MPEG2 = {'identifier': 'mpeg2video', 'format': 'mpeg2video', 'extension': 'mpg', 'label': 'MPEG-2'}

    OPUS = {'identifier': 'opus', 'format': 'opus', 'extension': 'opus', 'label': 'Opus'}
    AAC = {'identifier': 'aac', 'format': None, 'extension': 'aac', 'label': 'AAC'}
    AC3 = {'identifier': 'ac3', 'format': 'ac3', 'extension': 'ac3', 'label': 'AC3'}
    EAC3 = {'identifier': 'eac3', 'format': 'eac3', 'extension': 'eac3', 'label': 'EAC3'}
    DTS = {'identifier': 'dts', 'format': 'dts', 'extension': 'dts', 'label': 'DTS'}
    MP3 = {'identifier': 'mp3', 'format': 'mp3', 'extension': 'mp3', 'label': 'MP3'}

    WEBVTT = {'identifier': 'webvtt', 'format': 'webvtt', 'extension': 'vtt', 'label': 'WebVTT'}
    SRT = {'identifier': 'subrip', 'format': 'srt', 'extension': 'srt', 'label': 'SRT'}
    ASS = {'identifier': 'ass', 'format': 'ass', 'extension': 'ass', 'label': 'ASS'}
    TTF = {'identifier': 'ttf', 'format': None, 'extension': 'ttf', 'label': 'TTF'}
    PGS = {'identifier': 'hdmv_pgs_subtitle', 'format': 'sup', 'extension': 'sup', 'label': 'PGS'}
    VOBSUB = {'identifier': 'dvd_subtitle', 'format': None, 'extension': 'mkv', 'label': 'VobSub'}

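The `TrackCodec` hunk above changes `H265`'s `format` to `None`, so the unmuxer no longer forces an output container for raw HEVC. Each member carries its ffprobe `codec_name` in its dict value, which is what the `TrackCodec.identify(...)` calls elsewhere in this diff rely on. A minimal sketch of how such an identifier lookup over a dict-valued enum could work, using an abbreviated stand-in enum (`MiniCodec` is hypothetical, not the real `TrackCodec`):

```python
from enum import Enum


class MiniCodec(Enum):
    """Abbreviated stand-in for TrackCodec: dict values keyed like the diff."""

    H265 = {'identifier': 'hevc', 'format': None, 'extension': 'h265', 'label': 'H.265'}
    H264 = {'identifier': 'h264', 'format': 'h264', 'extension': 'h264', 'label': 'H.264'}
    UNKNOWN = {'identifier': None, 'format': None, 'extension': None, 'label': 'Unknown'}

    def identifier(self):
        return self.value['identifier']

    @classmethod
    def identify(cls, codec_name):
        # Match an ffprobe codec_name against each member's identifier.
        for member in cls:
            if member.value['identifier'] == codec_name:
                return member
        return cls.UNKNOWN


found = MiniCodec.identify('hevc')
missing = MiniCodec.identify('av1')
```

Because dict values are unhashable, the enum cannot use a reverse value map, so the lookup has to scan members linearly; with a handful of codecs that cost is negligible.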
@@ -43,7 +43,7 @@ class TrackController():
        s = self.Session()
        track = Track(pattern_id = patId,
                      track_type = int(trackDescriptor.getType().index()),
                      codec_name = str(trackDescriptor.getCodec().identifier()),
                      codec_name = str(trackDescriptor.getFormatDescriptor().identifier()),
                      index = int(trackDescriptor.getIndex()),
                      source_index = int(trackDescriptor.getSourceIndex()),
                      disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet())),
@@ -82,7 +82,7 @@ class TrackController():
        track.index = int(trackDescriptor.getIndex())

        track.track_type = int(trackDescriptor.getType().index())
        track.codec_name = str(trackDescriptor.getCodec().identifier())
        track.codec_name = str(trackDescriptor.getFormatDescriptor().identifier())
        track.audio_layout = int(trackDescriptor.getAudioLayout().index())

        track.disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet()))

@@ -6,7 +6,7 @@ from textual.containers import Grid

from ffx.track_descriptor import TrackDescriptor
from .i18n import t
from .screen_support import go_back_or_exit
from .screen_support import build_screen_log_pane, go_back_or_exit


# Screen[dict[int, str, int]]
@@ -67,6 +67,9 @@ class TrackDeleteScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        self.query_one("#subindexlabel", Static).update(str(self.__trackDescriptor.getSubIndex()))
        self.query_one("#patternlabel", Static).update(str(self.__trackDescriptor.getPatternId()))
        self.query_one("#languagelabel", Static).update(str(self.__trackDescriptor.getLanguage().label()))
@@ -118,6 +121,7 @@ class TrackDeleteScreen(Screen):
        yield Button(t("Delete"), id="delete_button")
        yield Button(t("Cancel"), id="cancel_button")

        yield build_screen_log_pane()
        yield Footer()

@@ -1,5 +1,6 @@
from typing import Self

from .attachment_format import AttachmentFormat
from .iso_language import IsoLanguage
from .track_type import TrackType
from .audio_layout import AudioLayout
@@ -26,6 +27,7 @@ class TrackDescriptor:

    TRACK_TYPE_KEY = "track_type"
    CODEC_KEY = "codec_name"
    ATTACHMENT_FORMAT_KEY = "attachment_format"
    AUDIO_LAYOUT_KEY = "audio_layout"

    FFPROBE_INDEX_KEY = "index"
@@ -110,15 +112,6 @@ class TrackDescriptor:
        else:
            self.__trackType = TrackType.UNKNOWN

        if TrackDescriptor.CODEC_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.CODEC_KEY]) is not TrackCodec:
                raise TypeError(
                    f"TrackDesciptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec"
                )
            self.__trackCodec = kwargs[TrackDescriptor.CODEC_KEY]
        else:
            self.__trackCodec = TrackCodec.UNKNOWN

        if TrackDescriptor.TAGS_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.TAGS_KEY]) is not dict:
                raise TypeError(
@@ -151,6 +144,34 @@ class TrackDescriptor:
        else:
            self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED

        self.__trackCodec = TrackCodec.UNKNOWN
        self.__attachmentFormat = AttachmentFormat.UNKNOWN

        if self.__trackType == TrackType.ATTACHMENT:
            if TrackDescriptor.ATTACHMENT_FORMAT_KEY in kwargs.keys():
                if type(kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY]) is not AttachmentFormat:
                    raise TypeError(
                        f"TrackDesciptor.__init__(): Argument {TrackDescriptor.ATTACHMENT_FORMAT_KEY} is required to be of type AttachmentFormat"
                    )
                self.__attachmentFormat = kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY]
            elif TrackDescriptor.CODEC_KEY in kwargs.keys():
                legacyCodec = kwargs[TrackDescriptor.CODEC_KEY]
                if type(legacyCodec) is AttachmentFormat:
                    self.__attachmentFormat = legacyCodec
                elif type(legacyCodec) is TrackCodec:
                    self.__attachmentFormat = AttachmentFormat.fromTrackCodec(legacyCodec)
                else:
                    raise TypeError(
                        f"TrackDesciptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec for legacy attachment compatibility"
                    )
        else:
            if TrackDescriptor.CODEC_KEY in kwargs.keys():
                if type(kwargs[TrackDescriptor.CODEC_KEY]) is not TrackCodec:
                    raise TypeError(
                        f"TrackDesciptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec"
                    )
                self.__trackCodec = kwargs[TrackDescriptor.CODEC_KEY]

    @classmethod
    def fromFfprobe(cls, streamObj, subIndex: int = -1):
        """Processes ffprobe stream data as array with elements according to the following example
@@ -215,7 +236,12 @@ class TrackDescriptor:

        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = trackType

        kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.identify(streamObj[TrackDescriptor.FFPROBE_CODEC_KEY])
        if trackType == TrackType.ATTACHMENT:
            kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = AttachmentFormat.identifyFfprobeStream(streamObj)
        else:
            kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.identify(
                streamObj.get(TrackDescriptor.FFPROBE_CODEC_KEY)
            )

        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = (
            {
@@ -277,6 +303,14 @@ class TrackDescriptor:
    def getCodec(self) -> TrackCodec:
        return self.__trackCodec

    def getAttachmentFormat(self) -> AttachmentFormat:
        return self.__attachmentFormat

    def getFormatDescriptor(self):
        if self.__trackType == TrackType.ATTACHMENT:
            return self.__attachmentFormat
        return self.__trackCodec

    def getLanguage(self):
        if "language" in self.__trackTags.keys():
            return IsoLanguage.findThreeLetter(self.__trackTags["language"])
@@ -353,12 +387,16 @@ class TrackDescriptor:
            TrackDescriptor.SOURCE_INDEX_KEY: int(self.__sourceIndex),
            TrackDescriptor.SUB_INDEX_KEY: int(self.__subIndex),
            TrackDescriptor.TRACK_TYPE_KEY: self.__trackType,
            TrackDescriptor.CODEC_KEY: self.__trackCodec,
            TrackDescriptor.TAGS_KEY: dict(self.__trackTags),
            TrackDescriptor.DISPOSITION_SET_KEY: set(self.__dispositionSet),
            TrackDescriptor.AUDIO_LAYOUT_KEY: self.__audioLayout,
        }

        if self.__trackType == TrackType.ATTACHMENT:
            kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = self.__attachmentFormat
        else:
            kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec

        if context is not None:
            kwargs[TrackDescriptor.CONTEXT_KEY] = context
        elif self.__context:

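The new `getFormatDescriptor()` above lets callers such as `TrackController` persist `codec_name` without caring whether a track is a regular stream (described by `TrackCodec`) or an attachment (described by `AttachmentFormat`): both types expose `identifier()`. A minimal sketch of that duck-typed dispatch, using hypothetical stand-in classes rather than the real ffx types:

```python
from enum import Enum


class Codec(Enum):
    """Stand-in for TrackCodec."""
    H264 = 'h264'

    def identifier(self) -> str:
        return self.value


class AttachFmt(Enum):
    """Stand-in for AttachmentFormat."""
    TTF = 'ttf'

    def identifier(self) -> str:
        return self.value


class Descriptor:
    """Stand-in for TrackDescriptor: returns codec or attachment format."""

    def __init__(self, is_attachment: bool) -> None:
        self._is_attachment = is_attachment
        self._codec = Codec.H264
        self._attachment_format = AttachFmt.TTF

    def getFormatDescriptor(self):
        # Attachments and streams share the identifier() interface,
        # so callers never need to branch on the track type themselves.
        if self._is_attachment:
            return self._attachment_format
        return self._codec


video_id = Descriptor(is_attachment=False).getFormatDescriptor().identifier()
font_id = Descriptor(is_attachment=True).getFormatDescriptor().identifier()
```

Moving the branch into the descriptor is what allows the one-line change in `TrackController` (`getCodec()` to `getFormatDescriptor()`) instead of duplicating the attachment check at every persistence site.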
@@ -5,6 +5,7 @@ from textual.widgets import Header, Footer, Static, Button, SelectionList, Selec
from textual.containers import Grid
from textual.widgets._data_table import CellDoesNotExist

from .attachment_format import AttachmentFormat
from .audio_layout import AudioLayout
from .iso_language import IsoLanguage
from .tag_delete_screen import TagDeleteScreen
@@ -14,7 +15,13 @@ from .track_descriptor import TrackDescriptor
from .track_disposition import TrackDisposition
from .track_type import TrackType
from .i18n import t
from .screen_support import add_auto_table_column, build_screen_bootstrap, go_back_or_exit, populate_tag_table
from .screen_support import (
    add_auto_table_column,
    build_screen_bootstrap,
    build_screen_log_pane,
    go_back_or_exit,
    populate_tag_table,
)


class TrackDetailsScreen(Screen):
@@ -128,10 +135,14 @@ class TrackDetailsScreen(Screen):
        self.__patternLabel = str(patternLabel)
        self.__siblingTrackDescriptors = list(siblingTrackDescriptors or [])
        self.__metadataOnly = bool(metadata_only)
        self.__applyNormalization = bool(
            self.context.get("apply_metadata_normalization", True)
        )

        if self.__isNew:
            self.__trackType = trackType
            self.__trackCodec = TrackCodec.UNKNOWN
            self.__attachmentFormat = AttachmentFormat.UNKNOWN
            self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED
            self.__index = index
            self.__subIndex = subIndex
@@ -141,6 +152,7 @@ class TrackDetailsScreen(Screen):
        else:
            self.__trackType = trackDescriptor.getType()
            self.__trackCodec = trackDescriptor.getCodec()
            self.__attachmentFormat = trackDescriptor.getAttachmentFormat()
            self.__audioLayout = trackDescriptor.getAudioLayout()
            self.__index = trackDescriptor.getIndex()
            self.__subIndex = trackDescriptor.getSubIndex()
@@ -152,8 +164,13 @@ class TrackDetailsScreen(Screen):
        initial_language = trackDescriptor.getLanguage()
        initial_title = trackDescriptor.getTitle()

        self.__titleAutoManaged = (
            initial_language == IsoLanguage.UNDEFINED and not str(initial_title).strip()
        initialTitleEmpty = not str(initial_title).strip()
        self.__titleAutoManaged = bool(
            initialTitleEmpty
            and (
                initial_language == IsoLanguage.UNDEFINED
                or (self.__metadataOnly and self.__applyNormalization)
            )
        )
        self.__suppressTitleChanged = False
        self.__lastAutoTitle = ""
@@ -222,6 +239,9 @@ class TrackDetailsScreen(Screen):

    def on_mount(self):
        if getattr(self, 'context', {}).get('debug', False):
            self.title = f"{self.app.title} - {self.__class__.__name__}"

        self.query_one("#index_label", Static).update(
            str(self.__index) if self.__index is not None else "-"
        )
@@ -256,6 +276,8 @@ class TrackDetailsScreen(Screen):
            self.__trackDescriptor.getLanguage()
        )
        self.query_one("#title_input", Input).value = self.__trackDescriptor.getTitle()
        if self.__titleAutoManaged and not self.__trackDescriptor.getTitle().strip():
            self._apply_auto_title_for_language(self.__trackDescriptor.getLanguage())
        self.updateTags()

        if self.__metadataOnly:
@@ -387,6 +409,7 @@ class TrackDetailsScreen(Screen):
        # Row 24
        yield Static(" ", classes="five", id="messagestatic")

        yield build_screen_log_pane()
        yield Footer(id="footer")

    def getTrackDescriptorFromInput(self):
@@ -413,7 +436,10 @@ class TrackDetailsScreen(Screen):
        if not isinstance(selectedTrackType, TrackType):
            selectedTrackType = TrackType.UNKNOWN
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = selectedTrackType
        kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec
        if selectedTrackType == TrackType.ATTACHMENT:
            kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = self.__attachmentFormat
        else:
            kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec

        if selectedTrackType == TrackType.AUDIO:
            selectedAudioLayout = self.query_one("#audio_layout_select", Select).value

@@ -7,6 +7,7 @@ import os
from pathlib import Path
import subprocess
import sys
from functools import lru_cache
from typing import Mapping


@@ -95,8 +96,69 @@ def write_vtt(path: Path, lines: tuple[str, ...]) -> Path:
    return path


def create_source_fixture(workdir: Path, filename: str, tracks: list[SourceTrackSpec], duration_seconds: int = 1) -> Path:
@lru_cache(maxsize=None)
def _ffmpeg_encoder_is_available(encoder_name: str) -> bool:
    completed = subprocess.run(
        ["ffmpeg", "-encoders"],
        capture_output=True,
        text=True,
    )
    if completed.returncode != 0:
        return False

    encoder_label = str(encoder_name).strip()
    for line in completed.stdout.splitlines():
        if not line.startswith(" "):
            continue

        tokens = line.split(maxsplit=2)
        if len(tokens) >= 2 and tokens[1] == encoder_label:
            return True

    return False


def _resolve_fixture_video_encoder(
    video_encoder: str,
    video_encoder_options: tuple[str, ...],
) -> tuple[str, tuple[str, ...]]:
    if video_encoder != "libx264":
        return video_encoder, video_encoder_options

    if _ffmpeg_encoder_is_available("libx264"):
        return video_encoder, video_encoder_options

    if _ffmpeg_encoder_is_available("libopenh264"):
        # Keep fixture generation software-based when libx264 is missing.
        return "libopenh264", ("-pix_fmt", "yuv420p")

    return video_encoder, video_encoder_options


def create_source_fixture(
    workdir: Path,
    filename: str,
    tracks: list[SourceTrackSpec],
    duration_seconds: int = 1,
    *,
    video_encoder: str = "libx264",
    video_encoder_options: tuple[str, ...] = (
        "-preset",
        "ultrafast",
        "-crf",
        "35",
        "-pix_fmt",
        "yuv420p",
    ),
    audio_encoder: str = "aac",
    audio_encoder_options: tuple[str, ...] = ("-b:a", "48k"),
    subtitle_encoder: str = "webvtt",
) -> Path:
    output_path = workdir / filename
    video_encoder, video_encoder_options = _resolve_fixture_video_encoder(
        video_encoder,
        video_encoder_options,
    )

    has_video = any(track.track_type == TrackType.VIDEO for track in tracks)
    has_audio = any(track.track_type == TrackType.AUDIO for track in tracks)
@@ -189,21 +251,16 @@ def create_source_fixture(workdir: Path, filename: str, tracks: list[SourceTrack
    command += map_tokens
    command += metadata_tokens
    command += disposition_tokens
    if has_video:
        command += ["-c:v", video_encoder] + list(video_encoder_options)

    if has_audio:
        command += ["-c:a", audio_encoder] + list(audio_encoder_options)

    if subtitle_input_indices:
        command += ["-c:s", subtitle_encoder]

    command += [
        "-c:v",
        "libx264",
        "-preset",
        "ultrafast",
        "-crf",
        "35",
        "-pix_fmt",
        "yuv420p",
        "-c:a",
        "aac",
        "-b:a",
        "48k",
        "-c:s",
        "webvtt",
        "-t",
        str(duration_seconds),
        "-shortest",

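`_ffmpeg_encoder_is_available` above depends on the layout of `ffmpeg -encoders` output: every encoder line is indented by one space and its second whitespace-separated token is the encoder name. The parsing logic can be exercised without invoking ffmpeg by feeding it a canned sample (the sample text below is illustrative, not captured from a real ffmpeg build):

```python
SAMPLE_ENCODERS_OUTPUT = """Encoders:
 V..... = Video
 A..... = Audio
 ------
 V....D libx264              libx264 H.264 / AVC / MPEG-4 AVC
 A....D aac                  AAC (Advanced Audio Coding)
"""


def encoder_listed(output: str, encoder_name: str) -> bool:
    # Same shape as _ffmpeg_encoder_is_available, minus the subprocess call:
    # skip unindented header lines, then compare the second token per line.
    label = str(encoder_name).strip()
    for line in output.splitlines():
        if not line.startswith(" "):
            continue
        tokens = line.split(maxsplit=2)
        if len(tokens) >= 2 and tokens[1] == label:
            return True
    return False


has_x264 = encoder_listed(SAMPLE_ENCODERS_OUTPUT, "libx264")
has_openh264 = encoder_listed(SAMPLE_ENCODERS_OUTPUT, "libopenh264")
```

Note that the legend lines (`V..... = Video`) are also indented, which is why the helper compares the token exactly rather than using a substring match.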
@@ -168,6 +168,40 @@ class CliLazyImportTests(unittest.TestCase):
            result["modules"],
        )

    def test_root_debug_flag_parses_without_loading_runtime_modules(self):
        result = self.run_python(
            textwrap.dedent(
                f"""
                import json
                import sys

                sys.path.insert(0, {str(SRC_ROOT)!r})

                import ffx.cli

                context = ffx.cli.ffx.make_context(
                    "ffx",
                    ["--debug", "help"],
                    resilient_parsing=True,
                )

                print(json.dumps({{
                    "debug": context.params["debug"],
                    "modules": {{
                        module_name: module_name in sys.modules
                        for module_name in {HEAVY_MODULES!r}
                    }},
                }}))
                """
            )
        )

        self.assertTrue(result["debug"])
        self.assertTrue(
            all(not is_loaded for is_loaded in result["modules"].values()),
            result["modules"],
        )

    def test_convert_cut_option_supports_flag_duration_and_start_duration_forms(self):
        result = self.run_python(
            textwrap.dedent(

89 tests/unit/test_cli_unmux_sequence.py (Normal file)
@@ -0,0 +1,89 @@
from __future__ import annotations

from pathlib import Path
import sys
import unittest


SRC_ROOT = Path(__file__).resolve().parents[2] / "src"

if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))


from ffx import cli  # noqa: E402
from ffx.track_codec import TrackCodec  # noqa: E402
from ffx.track_descriptor import TrackDescriptor  # noqa: E402
from ffx.track_type import TrackType  # noqa: E402


class UnmuxSequenceTests(unittest.TestCase):
    def test_h265_video_unmux_uses_annex_b_bitstream_filter_without_forced_format(self):
        track_descriptor = TrackDescriptor(
            index=0,
            sub_index=0,
            track_type=TrackType.VIDEO,
            codec_name=TrackCodec.H265,
            tags={},
            disposition_set=set(),
        )

        sequence = cli.getUnmuxSequence(
            track_descriptor,
            "input.mp4",
            "episode_0_eng",
        )

        self.assertEqual(
            [
                "ffmpeg",
                "-y",
                "-i",
                "input.mp4",
                "-map",
                "0:v:0",
                "-c:v",
                "copy",
                "-bsf:v",
                "hevc_mp4toannexb",
                "episode_0_eng.h265",
            ],
            sequence,
        )

    def test_non_h265_unmux_keeps_generic_copy_behavior(self):
        track_descriptor = TrackDescriptor(
            index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.SRT,
            tags={},
            disposition_set=set(),
        )

        sequence = cli.getUnmuxSequence(
            track_descriptor,
            "input.mkv",
            "episode_1_eng",
        )

        self.assertEqual(
            [
                "ffmpeg",
                "-y",
                "-i",
                "input.mkv",
                "-map",
                "0:s:0",
                "-c",
                "copy",
                "-f",
                "srt",
                "episode_1_eng.srt",
            ],
            sequence,
        )


if __name__ == "__main__":
    unittest.main()
@@ -1,5 +1,6 @@
from __future__ import annotations

import click
from pathlib import Path
import sys
import unittest
@@ -32,6 +33,9 @@ class StaticConfig:


class FfxControllerTests(unittest.TestCase):
    def tearDown(self):
        FfxController.isFfmpegEncoderAvailable.cache_clear()

    def make_context(self, video_encoder: VideoEncoder) -> dict:
        return {
            "logger": get_ffx_logger(),
@@ -192,6 +196,62 @@ class FfxControllerTests(unittest.TestCase):
        self.assertIn("ENCODING_QUALITY=19", commands[0])
        mocked_info.assert_any_call("Setting quality 19 from pattern")

    def test_generate_h264_tokens_prefers_libx264_when_available(self):
        context = self.make_context(VideoEncoder.H264)
        target_descriptor, source_descriptor = self.make_media_descriptors()
        controller = FfxController(context, target_descriptor, source_descriptor)

        with patch.object(
            FfxController,
            "getSupportedSoftwareH264Encoder",
            return_value="libx264",
        ):
            tokens = controller.generateH264Tokens(23)

        self.assertEqual(
            ["-c:v:0", "libx264", "-preset", "slow", "-crf", "23"],
            tokens,
        )

    def test_generate_h264_tokens_falls_back_to_libopenh264_and_logs_warning(self):
        context = self.make_context(VideoEncoder.H264)
        target_descriptor, source_descriptor = self.make_media_descriptors()
        controller = FfxController(context, target_descriptor, source_descriptor)

        with (
            patch.object(
                FfxController,
                "getSupportedSoftwareH264Encoder",
                return_value="libopenh264",
            ),
            patch.object(context["logger"], "warning") as mocked_warning,
        ):
            tokens = controller.generateH264Tokens(23)

        self.assertEqual(
            ["-c:v:0", "libopenh264", "-pix_fmt", "yuv420p"],
            tokens,
        )
        mocked_warning.assert_called_once_with(
            "libx264 encoder unavailable; falling back to libopenh264 for H.264 encoding."
        )

    def test_generate_h264_tokens_raises_when_no_supported_software_encoder_exists(self):
        context = self.make_context(VideoEncoder.H264)
        target_descriptor, source_descriptor = self.make_media_descriptors()
        controller = FfxController(context, target_descriptor, source_descriptor)

        with patch.object(
            FfxController,
            "getSupportedSoftwareH264Encoder",
            return_value=None,
        ):
            with self.assertRaisesRegex(
                click.ClickException,
                "no supported software H.264 encoder is available",
            ):
                controller.generateH264Tokens(23)


if __name__ == "__main__":
    unittest.main()
82 tests/unit/test_file_properties_asset_probe.py (Normal file)
@@ -0,0 +1,82 @@
from __future__ import annotations

from pathlib import Path
import sys
import tempfile
import unittest


SRC_ROOT = Path(__file__).resolve().parents[2] / "src"

if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))


from ffx.file_properties import FileProperties  # noqa: E402
from ffx.i18n import set_current_language  # noqa: E402
from ffx.logging_utils import get_ffx_logger  # noqa: E402
from ffx.track_codec import TrackCodec  # noqa: E402
from ffx.track_type import TrackType  # noqa: E402
from tests.support.ffx_bundle import SourceTrackSpec, create_source_fixture  # noqa: E402


class StaticConfig:
    def __init__(self, data: dict):
        self._data = data

    def getData(self):
        return self._data


class FilePropertiesAssetProbeTests(unittest.TestCase):
    def tearDown(self):
        set_current_language("de")

    def test_boruto_webm_probe_recognizes_webm_stream_codecs(self):
        context = {
            "logger": get_ffx_logger(),
            "config": StaticConfig({}),
            "language": "de",
            "use_pattern": False,
        }
        set_current_language("de")

        with tempfile.TemporaryDirectory() as tmpdir:
            media_path = create_source_fixture(
                Path(tmpdir),
                "fixture.webm",
                [
                    SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                    SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng"),
                    SourceTrackSpec(
                        TrackType.SUBTITLE,
                        identity="subtitle-2",
                        language="eng",
                        subtitle_lines=("Lorem ipsum dolor sit amet.",),
                    ),
                ],
                duration_seconds=3,
                video_encoder="libvpx-vp9",
                video_encoder_options=("-b:v", "0", "-crf", "45"),
                audio_encoder="libopus",
                audio_encoder_options=("-b:a", "48k"),
                subtitle_encoder="webvtt",
            )

            file_properties = FileProperties(context, str(media_path))
            tracks = file_properties.getMediaDescriptor().getTrackDescriptors()

            subtitle_codecs = [
                track.getCodec()
                for track in tracks
                if track.getType() == TrackType.SUBTITLE
            ]

            self.assertIn(TrackCodec.VP9, [track.getCodec() for track in tracks])
            self.assertIn(TrackCodec.OPUS, [track.getCodec() for track in tracks])
            self.assertTrue(subtitle_codecs)
            self.assertTrue(all(codec == TrackCodec.WEBVTT for codec in subtitle_codecs))


if __name__ == "__main__":
    unittest.main()
@@ -16,8 +16,10 @@ if str(SRC_ROOT) not in sys.path:
from ffx.logging_utils import (  # noqa: E402
    CONSOLE_HANDLER_NAME,
    FILE_HANDLER_NAME,
    MUTED_CONSOLE_LEVEL,
    configure_ffx_logger,
    get_ffx_logger,
    set_ffx_console_logging_enabled,
)


@@ -81,6 +83,33 @@ class LoggingUtilsTests(unittest.TestCase):

        self.cleanup_logger(logger_name)

    def test_set_ffx_console_logging_enabled_mutes_and_restores_console_handler(self):
        logger_name = "ffx-test-console-mute"
        self.cleanup_logger(logger_name)

        with tempfile.TemporaryDirectory() as tempdir:
            log_path = Path(tempdir) / "ffx.log"

            logger = configure_ffx_logger(
                str(log_path),
                logging.DEBUG,
                logging.INFO,
                name=logger_name,
            )
            console_handler = next(
                handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME
            )

            self.assertEqual(logging.INFO, console_handler.level)

            set_ffx_console_logging_enabled(logger, enabled=False)
            self.assertEqual(MUTED_CONSOLE_LEVEL, console_handler.level)

            set_ffx_console_logging_enabled(logger, enabled=True)
            self.assertEqual(logging.INFO, console_handler.level)

        self.cleanup_logger(logger_name)


if __name__ == "__main__":
    unittest.main()
@@ -247,7 +247,8 @@ class MediaDescriptorChangeSetTests(unittest.TestCase):
        self.assertIn("title=German", metadata_tokens)
        self.assertNotIn("title=Deutsch", metadata_tokens)

    def test_non_subtitle_track_without_title_does_not_get_language_name(self):
    def test_audio_track_without_title_gets_language_name_when_normalization_enabled(self):
        set_current_language("de")
        context = {
            "logger": get_ffx_logger(),
            "config": StaticConfig({}),
@@ -278,6 +279,73 @@ class MediaDescriptorChangeSetTests(unittest.TestCase):

        self.assertIn("-metadata:s:a:0", metadata_tokens)
        self.assertIn("language=deu", metadata_tokens)
        self.assertIn("title=Deutsch", metadata_tokens)

    def test_video_track_without_title_gets_language_name_when_normalization_enabled(self):
        set_current_language("de")
        context = {
            "logger": get_ffx_logger(),
            "config": StaticConfig({}),
        }

        source_track = TrackDescriptor(
            index=0,
            source_index=0,
            sub_index=0,
            track_type=TrackType.VIDEO,
            tags={"language": "ger"},
        )
        target_track = TrackDescriptor(
            index=0,
            source_index=0,
            sub_index=0,
            track_type=TrackType.VIDEO,
            tags={"language": "ger"},
        )

        change_set = MediaDescriptorChangeSet(
            context,
            MediaDescriptor(track_descriptors=[target_track]),
            MediaDescriptor(track_descriptors=[source_track]),
        )

        metadata_tokens = change_set.generateMetadataTokens()

        self.assertIn("language=deu", metadata_tokens)
        self.assertIn("title=Deutsch", metadata_tokens)

    def test_changed_track_language_does_not_autofill_title_when_title_already_exists(self):
        set_current_language("de")
        context = {
            "logger": get_ffx_logger(),
            "config": StaticConfig({}),
        }

        source_track = TrackDescriptor(
            index=0,
            source_index=0,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            tags={"language": "ger", "title": "Deutsch [FN]"},
        )
        target_track = TrackDescriptor(
            index=0,
            source_index=0,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            tags={"language": "jpn", "title": "Deutsch [FN]"},
        )

        change_set = MediaDescriptorChangeSet(
            context,
            MediaDescriptor(track_descriptors=[target_track]),
            MediaDescriptor(track_descriptors=[source_track]),
        )

        metadata_tokens = change_set.generateMetadataTokens()

        self.assertIn("language=jpn", metadata_tokens)
        self.assertNotIn("title=Japanisch", metadata_tokens)
        self.assertNotIn("title=Deutsch", metadata_tokens)

    def test_target_only_tracks_still_emit_remove_tokens_for_configured_stream_keys(self):
@@ -15,9 +15,11 @@ if str(SRC_ROOT) not in sys.path:


from ffx.logging_utils import get_ffx_logger  # noqa: E402
from ffx.helper import LogLevel  # noqa: E402
from ffx.media_descriptor import MediaDescriptor  # noqa: E402
from ffx.metadata_editor import (  # noqa: E402
    apply_metadata_edits,
    build_metadata_edit_command,
    build_metadata_edit_context,
    create_temporary_output_path,
)
@@ -32,6 +34,16 @@ class StaticConfig:
        return {}


class NotificationCollector:
    def __init__(self) -> None:
        self.messages: list[str] = []
        self.levels: list[LogLevel | None] = []

    def __call__(self, message: str, level: LogLevel | None = None) -> None:
        self.messages.append(message)
        self.levels.append(level)


def make_context(*, dry_run: bool = False) -> dict:
    return {
        "logger": get_ffx_logger(),
@@ -77,15 +89,45 @@ class MetadataEditorTests(unittest.TestCase):
        self.assertEqual(".mkv", Path(temporary_path).suffix)
        self.assertEqual(Path(source_path).parent, Path(temporary_path).parent)

    def test_build_metadata_edit_command_maps_all_streams_and_uses_single_copy_codec(self):
        context = build_metadata_edit_context(make_context())
        baseline_descriptor = make_descriptor()
        draft_descriptor = baseline_descriptor.clone(context=context)

        command = build_metadata_edit_command(
            context,
            "/tmp/example.mkv",
            "/tmp/.edit.mkv",
            baseline_descriptor,
            draft_descriptor,
        )

        self.assertEqual(1, command.count("-map"))
        self.assertEqual(1, command.count("-c"))
        self.assertNotIn("-c:v:0", command)
        self.assertNotIn("-c:a:0", command)
        self.assertNotIn("-c:s:0", command)
        self.assertEqual(
            ["-map", "0", "-c", "copy"],
            command[command.index("-map"):command.index("-c") + 2],
        )

    def test_apply_metadata_edits_rewrites_via_temporary_file_then_replaces_source(self):
        context = make_context()
        baseline_descriptor = make_descriptor()
        draft_descriptor = baseline_descriptor.clone(context=context)
        source_path = "/tmp/example.mkv"
        expected_command = build_metadata_edit_command(
            build_metadata_edit_context(context),
            source_path,
            "/tmp/.edit.mkv",
            baseline_descriptor,
            draft_descriptor,
        )

        with (
            patch("ffx.metadata_editor.create_temporary_output_path", return_value="/tmp/.edit.mkv"),
            patch("ffx.metadata_editor.FfxController.runJob") as mocked_run_job,
            patch("ffx.metadata_editor.executeProcess", return_value=("", "", 0)) as mocked_execute,
            patch("ffx.metadata_editor.os.replace") as mocked_replace,
        ):
            result = apply_metadata_edits(
@@ -95,32 +137,43 @@ class MetadataEditorTests(unittest.TestCase):
                draft_descriptor,
            )

        mocked_run_job.assert_called_once_with(
            source_path,
            "/tmp/.edit.mkv",
            targetFormat="",
            chainIteration=[],
        )
        mocked_execute.assert_called_once_with(expected_command, context=build_metadata_edit_context(context))
        mocked_replace.assert_called_once_with("/tmp/.edit.mkv", source_path)
        self.assertEqual(
            {
                "applied": True,
                "dry_run": False,
                "target_path": source_path,
                "command_sequence": expected_command,
            },
            {
                "applied": result["applied"],
                "dry_run": result["dry_run"],
                "target_path": result["target_path"],
                "command_sequence": result["command_sequence"],
            },
            result,
        )
        self.assertIn("timings", result)
        self.assertIn("ffmpeg_seconds", result["timings"])
        self.assertIn("replace_seconds", result["timings"])
        self.assertIn("write_seconds", result["timings"])

    def test_apply_metadata_edits_dry_run_skips_replace_and_cleans_temp_path(self):
        context = make_context(dry_run=True)
        baseline_descriptor = make_descriptor()
        draft_descriptor = baseline_descriptor.clone(context=context)
        notifications = NotificationCollector()
        expected_command = build_metadata_edit_command(
            build_metadata_edit_context(context),
            "/tmp/example.mkv",
            "/tmp/.edit.mkv",
            baseline_descriptor,
            draft_descriptor,
        )

        with (
            patch("ffx.metadata_editor.create_temporary_output_path", return_value="/tmp/.edit.mkv"),
            patch("ffx.metadata_editor.FfxController.runJob") as mocked_run_job,
            patch("ffx.metadata_editor.os.path.exists", return_value=True),
            patch("ffx.metadata_editor.os.remove") as mocked_remove,
            patch("ffx.metadata_editor.executeProcess") as mocked_execute,
            patch("ffx.metadata_editor.os.replace") as mocked_replace,
        ):
            result = apply_metadata_edits(
@@ -128,19 +181,59 @@ class MetadataEditorTests(unittest.TestCase):
                "/tmp/example.mkv",
                baseline_descriptor,
                draft_descriptor,
                loggingHandler=notifications,
            )

        mocked_run_job.assert_called_once()
        mocked_execute.assert_not_called()
        mocked_replace.assert_not_called()
        mocked_remove.assert_called_once_with("/tmp/.edit.mkv")
        self.assertEqual(["ffmpeg dry-run prepared."], notifications.messages)
        self.assertEqual([None], notifications.levels)
        self.assertEqual(
            {
                "applied": False,
                "dry_run": True,
                "target_path": "/tmp/.edit.mkv",
                "command_sequence": expected_command,
            },
            {
                "applied": result["applied"],
                "dry_run": result["dry_run"],
                "target_path": result["target_path"],
                "command_sequence": result["command_sequence"],
            },
            result,
        )
        self.assertEqual(
            {
                "ffmpeg_seconds": 0.0,
                "replace_seconds": 0.0,
                "write_seconds": 0.0,
            },
            result["timings"],
        )

    def test_apply_metadata_edits_notifies_with_command_when_verbose(self):
        context = make_context()
        context["verbosity"] = 1
        baseline_descriptor = make_descriptor()
        draft_descriptor = baseline_descriptor.clone(context=context)
        notifications = NotificationCollector()

        with (
            patch("ffx.metadata_editor.create_temporary_output_path", return_value="/tmp/.edit.mkv"),
            patch("ffx.metadata_editor.executeProcess", return_value=("", "", 0)),
            patch("ffx.metadata_editor.os.replace"),
        ):
            apply_metadata_edits(
                context,
                "/tmp/example.mkv",
                baseline_descriptor,
                draft_descriptor,
                loggingHandler=notifications,
            )

        self.assertEqual(1, len(notifications.messages))
        self.assertTrue(notifications.messages[0].startswith("ffmpeg: ffmpeg "))
        self.assertEqual([LogLevel.DEBUG], notifications.levels)


if __name__ == "__main__":
@@ -1,6 +1,7 @@
from __future__ import annotations

from pathlib import Path
import logging
import sys
import unittest
from unittest.mock import patch
@@ -57,9 +58,38 @@ class FakeScreen:
        self.app = FakeApp(screen_stack)


class FakeRichLog:
    def __init__(self):
        self.messages = []

    def write(self, message):
        self.messages.append(message)


class FakeScreenWithLog:
    def __init__(self):
        self.log_view = FakeRichLog()

    def query_one(self, selector, _widget_type=None):
        if selector == f"#{screen_support.SCREEN_LOG_VIEW_ID}":
            return self.log_view
        raise LookupError(selector)


class FakeThreadedApp:
    def __init__(self, screen):
        self.screen = screen
        self.calls = []

    def call_from_thread(self, func, *args):
        self.calls.append((func, args))
        return func(*args)


class ScreenSupportTests(unittest.TestCase):
    def tearDown(self):
        set_current_language("de")
        screen_support.set_screen_log_pane_enabled(False)

    def make_context(self):
        return {
@@ -168,6 +198,63 @@ class ScreenSupportTests(unittest.TestCase):
        self.assertGreater(len(translated), 8)
        self.assertEqual(len(translated) + 2, screen_support.localized_column_width(translated, 8))

    def test_build_screen_log_pane_is_hidden_when_debug_mode_is_disabled(self):
        screen_support.set_screen_log_pane_enabled(False)

        log_pane = screen_support.build_screen_log_pane()

        self.assertFalse(log_pane.display)

    def test_build_screen_log_pane_is_collapsed_when_debug_mode_is_enabled(self):
        screen_support.set_screen_log_pane_enabled(True)

        log_pane = screen_support.build_screen_log_pane()

        self.assertIsInstance(log_pane, screen_support.ResizableScreenLogPane)
        self.assertEqual(screen_support.SCREEN_LOG_PANE_ID, log_pane.id)
        self.assertTrue(log_pane.collapsed)

    def test_resizable_screen_log_pane_clamps_height_to_minimum(self):
        log_pane = screen_support.ResizableScreenLogPane()

        log_pane.set_log_height(1)

        self.assertEqual(screen_support.SCREEN_LOG_MIN_HEIGHT, log_pane.get_log_height())

    def test_configure_screen_log_handler_routes_logger_messages_to_active_screen(self):
        logger_name = "ffx-test-screen-log-handler"
        logger = logging.getLogger(logger_name)
        logger.setLevel(logging.DEBUG)
        logger.propagate = False

        for handler in list(logger.handlers):
            logger.removeHandler(handler)
            handler.close()

        screen = FakeScreenWithLog()
        app = FakeThreadedApp(screen)

        try:
            handler = screen_support.configure_screen_log_handler(
                logger,
                app,
                enabled=True,
            )
            self.assertIsNotNone(handler)

            logger.info("hello pane")

            self.assertEqual(1, len(screen.log_view.messages))
            self.assertRegex(
                screen.log_view.messages[0],
                r"^ffx-test-screen-log-handler\s+INFO\s+\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \| hello pane$",
            )
        finally:
            screen_support.configure_screen_log_handler(logger, app, enabled=False)
            for handler in list(logger.handlers):
                logger.removeHandler(handler)
                handler.close()


if __name__ == "__main__":
    unittest.main()
@@ -14,10 +14,13 @@ if str(SRC_ROOT) not in sys.path:


from ffx.audio_layout import AudioLayout  # noqa: E402
from ffx.attachment_format import AttachmentFormat  # noqa: E402
from ffx.helper import DIFF_ADDED_KEY  # noqa: E402
from ffx.iso_language import IsoLanguage  # noqa: E402
from ffx.logging_utils import get_ffx_logger  # noqa: E402
from ffx.inspect_details_screen import InspectDetailsScreen  # noqa: E402
from ffx.i18n import set_current_language  # noqa: E402
from ffx.media_descriptor import MediaDescriptor  # noqa: E402
from ffx.media_edit_screen import MediaEditScreen  # noqa: E402
from ffx.pattern_details_screen import PatternDetailsScreen  # noqa: E402
from ffx.show_descriptor import ShowDescriptor  # noqa: E402
@@ -89,16 +92,21 @@ class FakeTagTable:


class FakeMediaDescriptor:
    def __init__(self, track_descriptors):
    def __init__(self, track_descriptors, tags=None):
        self._track_descriptors = list(track_descriptors)
        self._tags = dict(tags or {})

    def getTrackDescriptors(self):
        return list(self._track_descriptors)

    def getTags(self):
        return dict(self._tags)


class FakeValueWidget:
    def __init__(self, value):
        self.value = value
        self.disabled = False


class FakeInputWidget:
@@ -106,10 +114,21 @@ class FakeInputWidget:
        self.value = value


class FakeStaticWidget:
    def __init__(self, value=""):
        self.value = value

    def update(self, value):
        self.value = value


class FakeSelectionListWidget:
    def __init__(self, selected):
        self.selected = selected

    def add_option(self, _option):
        return None


def make_track_descriptor(index, sub_index, track_type):
    return TrackDescriptor(
@@ -182,6 +201,32 @@ class TagTableScreenStateTests(unittest.TestCase):
        self.assertEqual("German Audio", descriptor.getTitle())
        self.assertEqual("value", descriptor.getTags()["KEEP"])

    def test_track_details_screen_preserves_attachment_format_for_attachment_tracks(self):
        screen = object.__new__(TrackDetailsScreen)
        screen.context = {"logger": get_ffx_logger()}
        screen._TrackDetailsScreen__trackDescriptor = None
        screen._TrackDetailsScreen__patternId = 5
        screen._TrackDetailsScreen__index = 4
        screen._TrackDetailsScreen__subIndex = 0
        screen._TrackDetailsScreen__trackCodec = TrackCodec.UNKNOWN
        screen._TrackDetailsScreen__attachmentFormat = AttachmentFormat.TTF
        screen._TrackDetailsScreen__draftTrackTags = {"filename": "font.ttf", "mimetype": "font/ttf"}

        widgets = {
            "#type_select": FakeValueWidget(TrackType.ATTACHMENT),
            "#audio_layout_select": FakeValueWidget(AudioLayout.LAYOUT_UNDEFINED),
            "#language_select": FakeValueWidget(Select.NULL),
            "#title_input": FakeInputWidget(""),
            "#dispositions_selection_list": FakeSelectionListWidget(set()),
        }
        screen.query_one = lambda selector, _widget_type=None: widgets[selector]

        descriptor = screen.getTrackDescriptorFromInput()

        self.assertEqual(TrackType.ATTACHMENT, descriptor.getType())
        self.assertEqual(AttachmentFormat.TTF, descriptor.getAttachmentFormat())
        self.assertEqual(TrackCodec.UNKNOWN, descriptor.getCodec())

    def test_track_details_screen_auto_sets_localized_title_from_selected_language(self):
        set_current_language("de")
        screen = object.__new__(TrackDetailsScreen)
@@ -244,6 +289,49 @@ class TagTableScreenStateTests(unittest.TestCase):

        self.assertEqual("Preset", widgets["#title_input"].value)

    def test_track_details_screen_metadata_only_mount_shows_normalized_title_preview(self):
        set_current_language("de")
        screen = object.__new__(TrackDetailsScreen)
        screen._TrackDetailsScreen__index = 2
        screen._TrackDetailsScreen__subIndex = 0
        screen._TrackDetailsScreen__patternLabel = "demo"
        screen._TrackDetailsScreen__trackType = TrackType.AUDIO
        screen._TrackDetailsScreen__audioLayout = AudioLayout.LAYOUT_STEREO
        screen._TrackDetailsScreen__trackDescriptor = TrackDescriptor(
            index=2,
            source_index=2,
            sub_index=0,
            track_type=TrackType.AUDIO,
            codec_name=TrackCodec.DTS,
            audio_layout=AudioLayout.LAYOUT_STEREO,
            tags={"language": "ger"},
        )
        screen._TrackDetailsScreen__metadataOnly = True
        screen._TrackDetailsScreen__titleAutoManaged = True
        screen._TrackDetailsScreen__suppressTitleChanged = False
        screen._TrackDetailsScreen__lastAutoTitle = ""
        screen._TrackDetailsScreen__removeTrackKeys = []
        screen._TrackDetailsScreen__ignoreTrackKeys = []
        screen._TrackDetailsScreen__draftTrackTags = {}
        screen._TrackDetailsScreen__tagRowData = {}
        screen.updateTags = lambda: None

        widgets = {
            "#index_label": FakeStaticWidget(),
            "#subindex_label": FakeStaticWidget(),
            "#pattern_label": FakeStaticWidget(),
            "#type_select": FakeValueWidget(None),
            "#audio_layout_select": FakeValueWidget(None),
            "#dispositions_selection_list": FakeSelectionListWidget(set()),
            "#language_select": FakeValueWidget(None),
            "#title_input": FakeInputWidget(""),
        }
        screen.query_one = lambda selector, _widget_type=None: widgets[selector]

        screen.on_mount()

        self.assertEqual("Deutsch", widgets["#title_input"].value)

    def test_track_details_screen_language_options_are_sorted_by_localized_label(self):
        set_current_language("de")

@@ -326,12 +414,177 @@ class TagTableScreenStateTests(unittest.TestCase):
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([first_track])
        screen._trackRowData = {}
        screen._applyNormalization = False

        screen.updateTracks()

        self.assertEqual(9, len(screen.tracksTable.columns))
        self.assertIn("A much longer updated title", screen.tracksTable.rows["row-0"])

    def test_media_edit_screen_shows_normalized_audio_title_preview(self):
        set_current_language("de")
        audio_track = TrackDescriptor(
            index=1,
            source_index=1,
            sub_index=0,
            track_type=TrackType.AUDIO,
            codec_name=TrackCodec.DTS,
            audio_layout=AudioLayout.LAYOUT_STEREO,
            tags={"language": "ger"},
        )

        screen = object.__new__(MediaEditScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([audio_track])
        screen._trackRowData = {}
        screen._applyNormalization = True

        screen.updateTracks()

        self.assertIn("Deutsch", screen.tracksTable.rows["row-0"])

    def test_media_edit_screen_shows_normalized_video_title_preview(self):
        set_current_language("de")
        video_track = TrackDescriptor(
            index=0,
            source_index=0,
            sub_index=0,
            track_type=TrackType.VIDEO,
            codec_name=TrackCodec.H264,
            tags={"language": "ger"},
        )

        screen = object.__new__(MediaEditScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([video_track])
        screen._trackRowData = {}
        screen._applyNormalization = True

        screen.updateTracks()

        self.assertIn("Deutsch", screen.tracksTable.rows["row-0"])

    def test_media_edit_screen_toggle_normalization_refreshes_tracks(self):
        screen = object.__new__(MediaEditScreen)
        screen._applyNormalization = False

        calls = []

        screen.setApplyNormalization = lambda enabled: (
            setattr(screen, "_applyNormalization", bool(enabled)),
            calls.append("setApplyNormalization"),
        )
        screen.updateToggleButtons = lambda: calls.append("updateToggleButtons")
        screen.updateTracks = lambda: calls.append("updateTracks")
        screen.updateDifferences = lambda: calls.append("updateDifferences")
        screen.setMessage = lambda _message: calls.append("setMessage")

        screen.action_toggle_normalization()

        self.assertEqual(
            [
                "setApplyNormalization",
                "updateToggleButtons",
                "updateTracks",
                "updateDifferences",
                "setMessage",
            ],
            calls,
        )

    def test_media_edit_screen_handle_edit_track_updates_draft_descriptor(self):
        original_track = TrackDescriptor(
            index=1,
            source_index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "ger"},
        )
        context = {"logger": get_ffx_logger()}
        updated_track = original_track.clone(context=context)
        updated_track.getTags()["language"] = "eng"

        screen = object.__new__(MediaEditScreen)
        screen.context = context
        screen._sourceMediaDescriptor = MediaDescriptor(
            context=context,
            track_descriptors=[original_track],
        )

        calls = []
        screen.setMessage = lambda _message: calls.append("setMessage")
        screen.refreshAfterDraftChange = lambda: calls.append("refreshAfterDraftChange")

        screen.handle_edit_track(updated_track)

        self.assertEqual(
            "eng",
            screen._sourceMediaDescriptor.getTrackDescriptors()[0].getTags()["language"],
        )
        self.assertEqual(
            ["setMessage", "refreshAfterDraftChange"],
            calls,
        )

    def test_media_edit_screen_screen_resume_refreshes_draft_tables(self):
        screen = object.__new__(MediaEditScreen)
        screen.tracksTable = FakeTagTable()

        calls = []
        screen.refreshAfterDraftChange = lambda: calls.append("refreshAfterDraftChange")
        screen.updateToggleButtons = lambda: calls.append("updateToggleButtons")

        screen.on_screen_resume(None)

        self.assertEqual(
            ["refreshAfterDraftChange", "updateToggleButtons"],
            calls,
        )

    def test_pattern_details_screen_screen_resume_refreshes_tables(self):
        screen = object.__new__(PatternDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen.tagsTable = FakeTagTable()
        screen.shiftedSeasonsTable = FakeTagTable()
        screen._PatternDetailsScreen__pattern = object()

        calls = []
        screen.updateTags = lambda: calls.append("updateTags")
|
||||
screen.updateTracks = lambda: calls.append("updateTracks")
|
||||
screen.updateShiftedSeasons = lambda: calls.append("updateShiftedSeasons")
|
||||
|
||||
screen.on_screen_resume(None)
|
||||
|
||||
self.assertEqual(
|
||||
["updateTags", "updateTracks", "updateShiftedSeasons"],
|
||||
calls,
|
||||
)
|
||||
|
||||
def test_inspect_details_screen_handle_edit_pattern_refreshes_even_without_result(self):
|
||||
screen = object.__new__(InspectDetailsScreen)
|
||||
|
||||
calls = []
|
||||
screen.reloadProperties = lambda reset_draft=True: calls.append(
|
||||
("reloadProperties", reset_draft)
|
||||
)
|
||||
screen._currentPattern = None
|
||||
screen.updateMediaTags = lambda: calls.append("updateMediaTags")
|
||||
screen.updateTracks = lambda: calls.append("updateTracks")
|
||||
screen.updateDifferences = lambda: calls.append("updateDifferences")
|
||||
|
||||
screen.handle_edit_pattern(None)
|
||||
|
||||
self.assertEqual(
|
||||
[
|
||||
("reloadProperties", True),
|
||||
"updateMediaTags",
|
||||
"updateTracks",
|
||||
"updateDifferences",
|
||||
],
|
||||
calls,
|
||||
)
|
||||
|
||||
def test_pattern_details_screen_reads_selected_shifted_season_from_row_mapping(self):
|
||||
screen = object.__new__(PatternDetailsScreen)
|
||||
screen.shiftedSeasonsTable = FakeTagTable()
|
||||
@@ -438,6 +691,154 @@ class TagTableScreenStateTests(unittest.TestCase):
        self.assertNotIn(placeholder_key, screen._showRowData)
        self.assertEqual(0, screen.getRowIndexFromShowId(8))

    def test_inspect_details_screen_update_tracks_shows_target_pattern_tracks(self):
        source_track = TrackDescriptor(
            index=1,
            source_index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "ger", "title": "German Full"},
        )
        target_track = TrackDescriptor(
            index=1,
            source_index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "eng", "title": "English Full"},
        )

        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([source_track])
        screen._targetMediaDescriptor = FakeMediaDescriptor([target_track])
        screen._currentPattern = object()
        screen._trackRowData = {}
        screen._applyNormalization = False

        screen.updateTracks()

        self.assertIn("English Full", screen.tracksTable.rows["row-0"])
        self.assertIs(target_track, screen.getSelectedTrackDescriptor())

    def test_inspect_details_screen_update_tracks_blanks_irrelevant_attachment_fields(self):
        attachment_track = TrackDescriptor(
            index=4,
            source_index=4,
            sub_index=0,
            track_type=TrackType.ATTACHMENT,
            attachment_format=AttachmentFormat.TTF,
            tags={"filename": "font.ttf", "mimetype": "font/ttf"},
        )

        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([attachment_track])
        screen._targetMediaDescriptor = None
        screen._currentPattern = None
        screen._trackRowData = {}
        screen._applyNormalization = False

        screen.updateTracks()

        row = screen.tracksTable.rows["row-0"]

        self.assertEqual("4", row[0])
        self.assertEqual(" ", row[3])
        self.assertEqual(" ", row[7])
        self.assertEqual(" ", row[8])

    def test_inspect_details_screen_maps_target_selection_back_to_source_track(self):
        source_track = TrackDescriptor(
            index=3,
            source_index=7,
            sub_index=1,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "ger"},
        )
        target_track = TrackDescriptor(
            index=1,
            source_index=7,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "eng"},
        )

        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([source_track])
        screen._targetMediaDescriptor = FakeMediaDescriptor([target_track])
        screen._currentPattern = object()
        screen._trackRowData = {}
        screen._applyNormalization = False

        screen.updateTracks()

        self.assertIs(source_track, screen.getTrackEditSourceDescriptor())

    def test_inspect_details_screen_action_update_pattern_uses_existing_change_set_before_reload(self):
        class _FakePattern:
            def getPattern(self):
                return r"demo_(s[0-9]+e[0-9]+)\.mkv"

            def getId(self):
                return 9

        class _FakeTagController:
            def __init__(self, calls):
                self._calls = calls

            def deleteMediaTagByKey(self, pattern_id, key):
                self._calls.append(("deleteMediaTagByKey", pattern_id, key))

        calls = []

        screen = object.__new__(InspectDetailsScreen)
        screen._currentPattern = _FakePattern()
        screen._mediaChangeSetObj = {
            "tags": {
                DIFF_ADDED_KEY: {"TITLE": "Demo"},
            }
        }
        screen._tac = _FakeTagController(calls)
        screen._tc = type(
            "_FakeTrackController",
            (),
            {
                "addTrack": staticmethod(lambda *_args, **_kwargs: None),
                "deleteTrack": staticmethod(lambda *_args, **_kwargs: None),
                "setDispositionState": staticmethod(lambda *_args, **_kwargs: None),
            },
        )()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([], tags={})
        screen._targetMediaDescriptor = FakeMediaDescriptor([])
        screen.getPatternObjFromInput = lambda: {
            "show_id": 1,
            "pattern": r"demo_(s[0-9]+e[0-9]+)\.mkv",
        }
        screen.reloadProperties = lambda reset_draft=True: calls.append(
            ("reloadProperties", reset_draft)
        )
        screen.updateMediaTags = lambda: calls.append("updateMediaTags")
        screen.updateTracks = lambda: calls.append("updateTracks")
        screen.updateDifferences = lambda: calls.append("updateDifferences")

        screen.action_update_pattern()

        self.assertEqual(
            [
                ("deleteMediaTagByKey", 9, "TITLE"),
                ("reloadProperties", True),
                "updateMediaTags",
                "updateTracks",
                "updateDifferences",
            ],
            calls,
        )


if __name__ == "__main__":
    unittest.main()
25
tests/unit/test_track_codec_identification.py
Normal file
@@ -0,0 +1,25 @@
from __future__ import annotations

from pathlib import Path
import sys
import unittest


SRC_ROOT = Path(__file__).resolve().parents[2] / "src"

if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))


from ffx.track_codec import TrackCodec  # noqa: E402


class TrackCodecIdentificationTests(unittest.TestCase):
    def test_identify_modern_webm_codecs(self):
        self.assertEqual(TrackCodec.VP9, TrackCodec.identify("vp9"))
        self.assertEqual(TrackCodec.OPUS, TrackCodec.identify("opus"))
        self.assertEqual(TrackCodec.WEBVTT, TrackCodec.identify("webvtt"))


if __name__ == "__main__":
    unittest.main()
87
tests/unit/test_track_descriptor_probe.py
Normal file
@@ -0,0 +1,87 @@
from __future__ import annotations

import json
from pathlib import Path
import sys
import unittest


SRC_ROOT = Path(__file__).resolve().parents[2] / "src"

if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))


from ffx.attachment_format import AttachmentFormat  # noqa: E402
from ffx.media_descriptor import MediaDescriptor  # noqa: E402
from ffx.track_codec import TrackCodec  # noqa: E402
from ffx.track_descriptor import TrackDescriptor  # noqa: E402
from ffx.track_type import TrackType  # noqa: E402


ASSETS_ROOT = Path(__file__).resolve().parents[1] / "assets"


class TrackDescriptorProbeTests(unittest.TestCase):
    def test_attachment_without_codec_name_uses_font_metadata_to_identify_ttf(self):
        descriptor = TrackDescriptor.fromFfprobe(
            {
                "index": 4,
                "codec_type": "attachment",
                "disposition": {"default": 0},
                "tags": {
                    "filename": "AmazonEmberTanuki-Italic.ttf",
                    "mimetype": "font/ttf",
                },
            },
            subIndex=0,
        )

        self.assertIsNotNone(descriptor)
        self.assertEqual(TrackType.ATTACHMENT, descriptor.getType())
        self.assertEqual(AttachmentFormat.TTF, descriptor.getAttachmentFormat())
        self.assertEqual(AttachmentFormat.TTF, descriptor.getFormatDescriptor())
        self.assertEqual(TrackCodec.UNKNOWN, descriptor.getCodec())

    def test_attachment_without_codec_name_still_probes_as_unknown_when_not_font(self):
        descriptor = TrackDescriptor.fromFfprobe(
            {
                "index": 9,
                "codec_type": "attachment",
                "disposition": {"default": 0},
                "tags": {
                    "filename": "cover.bin",
                    "mimetype": "application/octet-stream",
                },
            },
            subIndex=0,
        )

        self.assertIsNotNone(descriptor)
        self.assertEqual(TrackType.ATTACHMENT, descriptor.getType())
        self.assertEqual(AttachmentFormat.UNKNOWN, descriptor.getAttachmentFormat())
        self.assertEqual(TrackCodec.UNKNOWN, descriptor.getCodec())

    def test_media_descriptor_from_boruto_probe_json_handles_attachment_streams_without_codec_name(self):
        probe_payload = json.loads(
            (ASSETS_ROOT / "ffprobe.out.json").read_text(encoding="utf-8")
        )

        descriptor = MediaDescriptor.fromFfprobe(
            {"logger": None},
            probe_payload["format"],
            probe_payload["streams"],
        )

        track_descriptors = descriptor.getTrackDescriptors()
        attachment_tracks = descriptor.getAttachmentTracks()

        self.assertEqual(14, len(track_descriptors))
        self.assertEqual(10, len(attachment_tracks))
        self.assertTrue(
            all(track.getAttachmentFormat() == AttachmentFormat.TTF for track in attachment_tracks)
        )


if __name__ == "__main__":
    unittest.main()
448
tools/merge_dev_into_main.sh
Executable file
@@ -0,0 +1,448 @@
#!/usr/bin/env bash

set -euo pipefail

DEV_BRANCH="dev"
MAIN_BRANCH="main"
ORIGIN_REMOTE="origin"
DEFAULT_AGENT_DEVELOPMENT_PATHS=(
  "AGENTS.md"
  "SCRATCHPAD.md"
  "guidance"
  "requirements"
  "prompts"
  "process"
  "tools/merge_dev_into_main.sh"
)
AGENT_DEVELOPMENT_PATHS=("${DEFAULT_AGENT_DEVELOPMENT_PATHS[@]}")

CURRENT_BRANCH="${DEV_BRANCH}"
ASSUME_YES=0
DRY_RUN=0
SKIP_TESTS=0

usage() {
  cat <<EOF
Usage: $(basename "$0") [--yes] [--dry-run] [--skip-tests] [--help]

Merge the local ${DEV_BRANCH} branch into ${MAIN_BRANCH}, remove agent-development files
from ${MAIN_BRANCH}, auto-resolve merge conflicts limited to those cleanup paths,
create a release merge commit and tag, push to ${ORIGIN_REMOTE}/${MAIN_BRANCH}, and
switch back to ${DEV_BRANCH}.

Options:
  --yes         Skip the interactive confirmation prompt.
  --dry-run     Print the validated release plan without changing git state.
  --skip-tests  Skip the default pre-release test gate (./tools/test.sh).
  --help        Show this help text.

Environment overrides:
  FFX_RELEASE_CLEAN_PATHS  Colon-separated path list to remove from ${MAIN_BRANCH}
                           after merging ${DEV_BRANCH}. Defaults to:
                           ${DEFAULT_AGENT_DEVELOPMENT_PATHS[*]}
EOF
}

fail() {
  printf '%s\n' "$*" >&2
  exit 1
}

cleanup() {
  local exit_code="$1"

  trap - EXIT

  if git rev-parse -q --verify MERGE_HEAD >/dev/null 2>&1; then
    printf 'Merge is incomplete; aborting merge on %s...\n' "${CURRENT_BRANCH}" >&2
    git merge --abort >/dev/null 2>&1 || true
  fi

  if [ "${CURRENT_BRANCH}" != "${DEV_BRANCH}" ]; then
    printf 'Switching back to %s...\n' "${DEV_BRANCH}" >&2
    git switch "${DEV_BRANCH}" >/dev/null 2>&1 || true
    CURRENT_BRANCH="${DEV_BRANCH}"
  fi

  exit "${exit_code}"
}

load_cleanup_paths() {
  if [ -n "${FFX_RELEASE_CLEAN_PATHS:-}" ]; then
    IFS=':' read -r -a AGENT_DEVELOPMENT_PATHS <<< "${FFX_RELEASE_CLEAN_PATHS}"
  fi

  if [ "${#AGENT_DEVELOPMENT_PATHS[@]}" -eq 0 ]; then
    fail "Release cleanup path list is empty."
  fi
}

path_is_cleanup_target() {
  local candidate_path="$1"
  local cleanup_path=""

  for cleanup_path in "${AGENT_DEVELOPMENT_PATHS[@]}"; do
    case "${candidate_path}" in
      "${cleanup_path}"|"${cleanup_path}"/*)
        return 0
        ;;
    esac
  done

  return 1
}

auto_resolve_cleanup_conflicts() {
  local unmerged_paths=()
  local non_cleanup_conflicts=()
  local remaining_conflicts=()
  local conflicted_path=""

  mapfile -t unmerged_paths < <(git diff --name-only --diff-filter=U)
  if [ "${#unmerged_paths[@]}" -eq 0 ]; then
    return 1
  fi

  for conflicted_path in "${unmerged_paths[@]}"; do
    if ! path_is_cleanup_target "${conflicted_path}"; then
      non_cleanup_conflicts+=("${conflicted_path}")
    fi
  done

  if [ "${#non_cleanup_conflicts[@]}" -ne 0 ]; then
    printf 'Merge produced non-cleanup conflicts:\n' >&2
    for conflicted_path in "${non_cleanup_conflicts[@]}"; do
      printf ' - %s\n' "${conflicted_path}" >&2
    done
    return 1
  fi

  printf 'Auto-resolving merge conflicts for release-cleanup paths:\n'
  for conflicted_path in "${unmerged_paths[@]}"; do
    printf ' - %s\n' "${conflicted_path}"
  done

  git rm -r -f --ignore-unmatch "${AGENT_DEVELOPMENT_PATHS[@]}" >/dev/null

  mapfile -t remaining_conflicts < <(git diff --name-only --diff-filter=U)
  if [ "${#remaining_conflicts[@]}" -ne 0 ]; then
    printf 'Cleanup conflict auto-resolution left unresolved paths:\n' >&2
    for conflicted_path in "${remaining_conflicts[@]}"; do
      printf ' - %s\n' "${conflicted_path}" >&2
    done
    return 1
  fi

  return 0
}

require_repo_state() {
  if ! git rev-parse --show-toplevel >/dev/null 2>&1; then
    fail "This helper must be run inside a git repository."
  fi

  if ! git show-ref --verify --quiet "refs/heads/${DEV_BRANCH}"; then
    fail "Local branch '${DEV_BRANCH}' does not exist."
  fi

  if ! git show-ref --verify --quiet "refs/heads/${MAIN_BRANCH}"; then
    fail "Local branch '${MAIN_BRANCH}' does not exist."
  fi

  if ! git remote get-url "${ORIGIN_REMOTE}" >/dev/null 2>&1; then
    fail "Remote '${ORIGIN_REMOTE}' is not configured."
  fi
}

require_dev_checkout() {
  CURRENT_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
  if [ "${CURRENT_BRANCH}" != "${DEV_BRANCH}" ]; then
    fail "Current branch is '${CURRENT_BRANCH}', but '${DEV_BRANCH}' is required."
  fi
}

require_clean_worktree() {
  if [ -n "$(git status --porcelain)" ]; then
    fail "Local '${DEV_BRANCH}' branch is dirty. Commit, stash, or clean changes first."
  fi
}

fetch_remote_state() {
  printf 'Fetching %s branch and tag state...\n' "${ORIGIN_REMOTE}"
  git fetch "${ORIGIN_REMOTE}" "${DEV_BRANCH}" "${MAIN_BRANCH}" --tags >/dev/null
}

branch_divergence_counts() {
  local branch="$1"
  local remote_only=""
  local local_only=""

  if ! git show-ref --verify --quiet "refs/remotes/${ORIGIN_REMOTE}/${branch}"; then
    fail "Remote branch '${ORIGIN_REMOTE}/${branch}' does not exist."
  fi

  read -r remote_only local_only < <(
    git rev-list --left-right --count \
      "refs/remotes/${ORIGIN_REMOTE}/${branch}...refs/heads/${branch}"
  )

  printf '%s %s\n' "${remote_only}" "${local_only}"
}

fast_forward_branch_to_remote() {
  local branch="$1"
  local remote_ref="refs/remotes/${ORIGIN_REMOTE}/${branch}"
  local current_head=""

  current_head="$(git rev-parse --abbrev-ref HEAD)"

  printf "Fast-forwarding local branch '%s' to '%s/%s'...\n" \
    "${branch}" \
    "${ORIGIN_REMOTE}" \
    "${branch}"

  if [ "${current_head}" = "${branch}" ]; then
    git merge --ff-only "${remote_ref}" >/dev/null
    return 0
  fi

  git branch -f "${branch}" "${remote_ref}" >/dev/null
}

sync_release_source_branch() {
  local branch="$1"
  local remote_only=""
  local local_only=""

  read -r remote_only local_only < <(branch_divergence_counts "${branch}")

  if [ "${remote_only}" -ne 0 ] && [ "${local_only}" -ne 0 ]; then
    fail "Local branch '${branch}' has diverged from '${ORIGIN_REMOTE}/${branch}' (${local_only} local-only commit(s), ${remote_only} remote-only commit(s)). Reconcile the branches first."
  fi

  if [ "${remote_only}" -ne 0 ]; then
    fast_forward_branch_to_remote "${branch}"
  fi

  if [ "${local_only}" -ne 0 ]; then
    printf "Notice: local branch '%s' is ahead of '%s/%s' by %s commit(s); release will use the local tip.\n" \
      "${branch}" \
      "${ORIGIN_REMOTE}" \
      "${branch}" \
      "${local_only}"
  fi
}

sync_release_target_branch() {
  local branch="$1"
  local remote_only=""
  local local_only=""

  read -r remote_only local_only < <(branch_divergence_counts "${branch}")

  if [ "${remote_only}" -ne 0 ] && [ "${local_only}" -ne 0 ]; then
    fail "Local branch '${branch}' has diverged from '${ORIGIN_REMOTE}/${branch}' (${local_only} local-only commit(s), ${remote_only} remote-only commit(s)). Reconcile the branches first."
  fi

  if [ "${local_only}" -ne 0 ]; then
    fail "Local branch '${branch}' is ahead of '${ORIGIN_REMOTE}/${branch}' by ${local_only} commit(s). Push or reconcile first so the release starts from the published ${branch} tip."
  fi

  if [ "${remote_only}" -ne 0 ]; then
    fast_forward_branch_to_remote "${branch}"
  fi
}

resolve_release_version() {
  local version_from_pyproject=""
  local version_from_constants=""

  version_from_pyproject="$(
    sed -n 's/^version = "\(.*\)"$/\1/p' pyproject.toml | head -n 1
  )"
  version_from_constants="$(
    sed -n "s/^VERSION='\(.*\)'$/\1/p" src/ffx/constants.py | head -n 1
  )"

  if [ -z "${version_from_pyproject}" ]; then
    fail "Could not resolve release version from pyproject.toml."
  fi

  if [ -z "${version_from_constants}" ]; then
    fail "Could not resolve release version from src/ffx/constants.py."
  fi

  if [ "${version_from_pyproject}" != "${version_from_constants}" ]; then
    fail "Version mismatch: pyproject.toml=${version_from_pyproject}, src/ffx/constants.py=${version_from_constants}."
  fi

  printf '%s\n' "${version_from_pyproject}"
}

require_release_tag_available() {
  local release_version="$1"
  local release_tag="v${release_version}"

  if git rev-parse -q --verify "refs/tags/${release_tag}" >/dev/null 2>&1; then
    fail "Tag '${release_tag}' already exists."
  fi

  if git rev-parse -q --verify "refs/tags/${release_version}" >/dev/null 2>&1; then
    fail "Bare tag '${release_version}' already exists; refusing to create ambiguous release tags."
  fi
}

run_pre_release_tests() {
  if [ "${SKIP_TESTS}" -eq 1 ]; then
    printf 'Skipping pre-release tests.\n'
    return 0
  fi

  if [ ! -x "./tools/test.sh" ]; then
    fail "Missing executable test runner at ./tools/test.sh."
  fi

  printf 'Running pre-release tests via ./tools/test.sh...\n'
  ./tools/test.sh
}

print_release_plan() {
  local release_version="$1"
  local release_tag="v${release_version}"
  local release_commit_message="Release ${release_tag}"

  printf 'Dry run only. Planned steps:\n'
  printf '1. Ensure current branch is %s and the worktree is clean.\n' "${DEV_BRANCH}"
  printf '2. Fetch %s, fast-forward local %s and %s from %s when safe, and fail on divergence or unpublished local %s commits.\n' \
    "${ORIGIN_REMOTE}" \
    "${DEV_BRANCH}" \
    "${MAIN_BRANCH}" \
    "${ORIGIN_REMOTE}" \
    "${MAIN_BRANCH}"
  if [ "${SKIP_TESTS}" -eq 1 ]; then
    printf '3. Skip the pre-release test gate.\n'
  else
    printf '3. Run ./tools/test.sh as the pre-release test gate.\n'
  fi
  printf '4. Switch to %s and merge %s with --no-ff --no-commit.\n' "${MAIN_BRANCH}" "${DEV_BRANCH}"
  printf '5. Auto-resolve merge conflicts limited to release-cleanup paths and remove them from %s:\n' "${MAIN_BRANCH}"
  local cleanup_path=""
  for cleanup_path in "${AGENT_DEVELOPMENT_PATHS[@]}"; do
    printf ' - %s\n' "${cleanup_path}"
  done
  printf '6. Create merge commit: %s\n' "${release_commit_message}"
  printf '7. Create annotated tag: %s\n' "${release_tag}"
  printf '8. Push %s to %s/%s with --follow-tags.\n' "${MAIN_BRANCH}" "${ORIGIN_REMOTE}" "${MAIN_BRANCH}"
  printf '9. Switch back to %s.\n' "${DEV_BRANCH}"
}

trap 'cleanup $?' EXIT

while [ "$#" -gt 0 ]; do
  case "$1" in
    --yes)
      ASSUME_YES=1
      ;;
    --dry-run)
      DRY_RUN=1
      ;;
    --skip-tests)
      SKIP_TESTS=1
      ;;
    --help|-h)
      usage
      exit 0
      ;;
    *)
      usage >&2
      fail "Unknown option: $1"
      ;;
  esac
  shift
done

load_cleanup_paths
require_repo_state
require_dev_checkout
require_clean_worktree
fetch_remote_state
sync_release_source_branch "${DEV_BRANCH}"
sync_release_target_branch "${MAIN_BRANCH}"

RELEASE_VERSION="$(resolve_release_version)"
RELEASE_TAG="v${RELEASE_VERSION}"
RELEASE_COMMIT_MESSAGE="Release ${RELEASE_TAG}"
require_release_tag_available "${RELEASE_VERSION}"

printf 'This will merge %s into %s, remove agent-development files on %s,\n' "${DEV_BRANCH}" "${MAIN_BRANCH}" "${MAIN_BRANCH}"
printf 'auto-resolve cleanup-path conflicts, run the pre-release gate%s, create %s,\n' \
  "$([ "${SKIP_TESTS}" -eq 1 ] && printf ' (skipped)' || printf '')" \
  "${RELEASE_TAG}"
printf 'push to %s/%s, and switch back to %s.\n' \
  "${ORIGIN_REMOTE}" \
  "${MAIN_BRANCH}" \
  "${DEV_BRANCH}"

if [ "${ASSUME_YES}" -ne 1 ]; then
  printf 'Are you sure? [y/N] '
  read -r confirmation
  case "${confirmation}" in
    y|Y|yes|YES)
      ;;
    *)
      fail "Aborted by user."
      ;;
  esac
fi

if [ "${DRY_RUN}" -eq 1 ]; then
  print_release_plan "${RELEASE_VERSION}"
  exit 0
fi

run_pre_release_tests
require_clean_worktree
fetch_remote_state
sync_release_source_branch "${DEV_BRANCH}"
sync_release_target_branch "${MAIN_BRANCH}"
require_release_tag_available "${RELEASE_VERSION}"

git switch "${MAIN_BRANCH}" >/dev/null
CURRENT_BRANCH="${MAIN_BRANCH}"

printf 'Merging %s into %s...\n' "${DEV_BRANCH}" "${MAIN_BRANCH}"
if ! git merge --no-ff --no-commit "${DEV_BRANCH}"; then
  if ! auto_resolve_cleanup_conflicts; then
    fail "Merge from '${DEV_BRANCH}' into '${MAIN_BRANCH}' failed."
  fi
fi

if ! git rev-parse -q --verify MERGE_HEAD >/dev/null 2>&1; then
  fail "'${MAIN_BRANCH}' is already up to date with '${DEV_BRANCH}'. Nothing to merge."
fi

printf 'Removing agent-development files from %s...\n' "${MAIN_BRANCH}"
git rm -r -f --ignore-unmatch "${AGENT_DEVELOPMENT_PATHS[@]}" >/dev/null

if git diff --cached --quiet; then
  fail "No staged changes are present after merging '${DEV_BRANCH}' into '${MAIN_BRANCH}'."
fi

printf 'Creating release merge commit: %s\n' "${RELEASE_COMMIT_MESSAGE}"
git commit -m "${RELEASE_COMMIT_MESSAGE}"

printf 'Creating annotated tag: %s\n' "${RELEASE_TAG}"
git tag -a "${RELEASE_TAG}" -m "FFX ${RELEASE_VERSION}"

printf 'Pushing %s and annotated tags to %s...\n' "${MAIN_BRANCH}" "${ORIGIN_REMOTE}"
git push "${ORIGIN_REMOTE}" "${MAIN_BRANCH}" --follow-tags

printf 'Switching back to %s...\n' "${DEV_BRANCH}"
git switch "${DEV_BRANCH}" >/dev/null
CURRENT_BRANCH="${DEV_BRANCH}"

printf 'Release merge complete: %s pushed to %s/%s and tagged as %s.\n' \
  "${RELEASE_COMMIT_MESSAGE}" \
  "${ORIGIN_REMOTE}" \
  "${MAIN_BRANCH}" \
  "${RELEASE_TAG}"