48 Commits

Author SHA1 Message Date
Javanaut  2e2c94f539  Release v0.4.2  2026-04-24 13:40:37 +02:00
Javanaut  12be6e985a  v0.4.2  2026-04-24 13:39:57 +02:00
Javanaut  12310942ae  Fix inspect attachment subtracks  2026-04-24 10:34:43 +02:00
Javanaut  f913cb4fe3  ff  2026-04-24 08:49:48 +02:00
Javanaut  0a153280e3  ff  2026-04-24 08:49:30 +02:00
Javanaut  6ca0cd54b0  addendum  2026-04-23 22:16:03 +02:00
Javanaut  14c956b6fa  Release v0.4.1  2026-04-23 22:10:04 +02:00
Javanaut  502a822bb4  prep 0.4.1  2026-04-23 22:09:36 +02:00
Javanaut  6cc21b5f36  Adds diagnostics/remedy system  2026-04-23 20:32:49 +02:00
Javanaut  0034f8ca97  ff  2026-04-23 16:37:47 +02:00
Javanaut  eedcbaed0a  Merge branch 'dev' of gitea.maveno.de:Javanaut/ffx into dev  2026-04-23 16:31:19 +02:00
Javanaut  653ce7b417  Copy audio and video flags  2026-04-23 16:30:15 +02:00
Javanaut  b80c055826  fix table  2026-04-17 13:17:15 +02:00
Javanaut  c5fc6ac13d  fix styled ASS and refactor att format  2026-04-17 11:41:13 +02:00
Javanaut  fea8ea4b70  Release v0.3.1  2026-04-16 19:44:07 +02:00
Javanaut  1bead05d19  ff  2026-04-16 19:36:40 +02:00
Javanaut  9fe2a842e9  ff  2026-04-16 19:32:41 +02:00
Javanaut  849d03d054  v0.3.1  2026-04-16 19:26:17 +02:00
Javanaut  3a87bbbba6  Anpassung --cut flag  2026-04-16 19:02:57 +02:00
Javanaut  ab5e8e53e1  Fix debug title  2026-04-16 18:32:07 +02:00
Javanaut  0ab2408444  Fix h265 format  2026-04-16 18:20:17 +02:00
Javanaut  bc1e0889e7  Fix inspect details screen  2026-04-16 18:10:39 +02:00
Javanaut  6dfbe1022a  Merge branch 'dev' of gitea.maveno.de:Javanaut/ffx into dev  2026-04-15 15:50:27 +02:00
Javanaut  d3d2de8a0d  adds scratchpad points  2026-04-15 15:50:24 +02:00
Javanaut  0728ece4b8  Fix h265 subtrack unmux  2026-04-15 00:03:17 +02:00
Javanaut  02e375fbf2  nnn  2026-04-14 19:08:29 +02:00
Javanaut  14e6ce8458  Fix logging  2026-04-14 10:04:39 +02:00
Javanaut  d314b6024d  Release v0.3.0  2026-04-14 00:56:21 +02:00
Javanaut  d921629947  v0.3.0  2026-04-14 00:55:42 +02:00
Javanaut  65490e2a7f  ff  2026-04-14 00:44:43 +02:00
Javanaut  6c5b518e4d  ffn  2026-04-14 00:26:16 +02:00
Javanaut  e3c18f22d4  Adds UI tweaks nightly  2026-04-13 23:11:14 +02:00
Javanaut  57185c7f10  Adds missing codecs  2026-04-13 20:15:10 +02:00
Javanaut  1ff9ecd4b6  Release v0.2.6  2026-04-13 20:06:34 +02:00
Javanaut  037388886e  prep 0.2.6  2026-04-13 20:04:25 +02:00
Javanaut  e614ca5d75  Splits screen classes  2026-04-13 14:57:13 +02:00
Javanaut  c0b3977ea6  iteration1  2026-04-13 13:16:33 +02:00
Javanaut  d9639561ce  Fix TUI widgets color bleedthru  2026-04-13 12:00:38 +02:00
Javanaut  cbf43e5d6c  adapt shift output  2026-04-12 20:41:31 +02:00
Javanaut  d6e885517d  Adds inspect --shift option  2026-04-12 20:34:33 +02:00
Javanaut  2593c95b5c  Release v0.2.5  2026-04-12 19:58:30 +02:00
Javanaut  8a8c43ecdf  v0.2.5  2026-04-12 19:57:46 +02:00
Javanaut  6170ac641c  ff  2026-04-12 19:35:03 +02:00
Javanaut  497c0e500b  ff  2026-04-12 19:34:51 +02:00
Javanaut  12509cd4e2  Release v0.2.4  2026-04-12 12:28:37 +02:00
Javanaut  3df11be5e9  upd .gitignore  2026-04-12 12:24:19 +02:00
Javanaut  72c735c3ee  ffn  2026-04-09 01:06:32 +02:00
Javanaut  381a62046b  nightly  2026-04-09 01:04:47 +02:00
98 changed files with 11502 additions and 2976 deletions

.gitignore (vendored), 4 lines changed

@@ -10,6 +10,8 @@ tools/ansible/inventory/group_vars/all.yml
ffx_test_report.log
bin/conversiontest.py
tests/assets/
build/
dist/
*.egg-info/
@@ -20,4 +22,6 @@ venv/
*.mkv
*.webm
*.mp4
ffmpeg2pass-0.log
*.sup

AGENTS.md, 376 lines changed

@@ -1,376 +0,0 @@
# AGENTS.md
This file is the entry point for agent guidance in this repository.
It is intentionally generic and reusable across projects. Keep this file focused on non-project-specific constraints, working style, and the structure used to link more detailed guidance.
# Purpose
- Provide a small default rule set for agents working in this repository.
- Keep the base guidance modular and easy to extend.
- Separate reusable agent behavior from project-specific requirements.
# Comment Syntax
- A segment wrapped in `<!--` and `-->` is a comment and must be ignored by agents.
- Use HTML comments for optional guidance that should stay inactive until enabled.
- To enable an optional segment, remove the surrounding `<!--` and `-->` markers.
# Core Principles
- Prefer the simplest solution that satisfies the current goal.
- Keep guidance lightweight: only add detail when it meaningfully improves outcomes.
- Reuse modular guideline files instead of expanding this file indefinitely.
- Treat project-specific documents as the source of truth for project behavior.
- When guidance conflicts, use the most specific applicable document.
# Rule Terms
- A `rule` is the general term for any constraint, requirement, definition, or similar guidance item.
- A `rule set` comprises all rules inside one file that share the same rule set ID.
- Any rule inside a rule set shall use an ID following the schema `RULESET-0001`, `RULESET-0002`, and so on.
- Rules without a rule set ID are also valid, but they are not addressable by rule ID.
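For illustration, a hypothetical rule set following this ID schema might look like the sketch below; the `LII` prefix and the rule wording are invented here, not part of this repository:

```text
# Rule set: Lean Interface Iteration (ID prefix: LII)
LII-0001: Prefer minimal interfaces over configurable ones.
LII-0002: Remove a parameter in the same change that orphans it.
LII-0003: Defer abstractions until a second concrete caller exists.
```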
# Scope Of This File
This file should contain:
- Generic agent behavior and constraints.
- Rules that are reusable across multiple projects.
- Links to optional guideline modules.
- Links to project-specific requirements.
- Commented optional templates for released-product documentation and agent-output locations.
This file should not contain:
- Project business requirements.
- Project architecture decisions.
- Stack-specific implementation details unless they are universally applicable.
- Task-specific runbooks that belong in dedicated modules.
# Default Agent Behavior
- Read the relevant context before making changes.
- Prefer small, understandable edits over broad refactors.
- Preserve existing patterns unless there is a clear reason to change them.
- Document assumptions when context is missing.
- Ignore HTML comment segments.
- If a more specific enabled guideline exists for the current task, follow it.
# Guideline Structure
Use the following structure for reusable guidance files and project-specific documentation as needed:
```text
/
|-- AGENTS.md
|-- guidance/
| |-- stacks/
| |-- conventions/
| `-- workflows/
|-- prompts/
`-- requirements/
Optional files and directories
|-- SCRATCHPAD.md
|-- docs/
| |-- readme.md
| |-- installation.md
| `-- history.md
|-- process/
| |-- log.md
| `-- coding-handbook.md
```
# Optional Reusable Modules
Add files under `guidance/` only when they are needed.
# Optional Scratchpad
- `SCRATCHPAD.md` is an optional repo-root scratchpad for temporary
information aimed at the next iteration.
- Developers may create or delete `SCRATCHPAD.md` at any time.
- Developers may refer to `SCRATCHPAD.md` as `scratchpad` when giving agents a
source or target for information.
- Agents may read, update, create, or remove the scratchpad when the task
explicitly calls for it.
- Treat the scratchpad as low-formality working context rather than canonical
project truth.
- Use the scratchpad for short-lived notes, open questions, sketches, and
temporary decisions that should be resolved away.
- Move durable outcomes into `requirements/`, `guidance/`, code, tests, or
another long-lived location.
- If `SCRATCHPAD.md` is absent, agents should continue normally.
# Optional Rule Sets
- Optional rule sets may be stored in `guidance/optional/` or in `guidance/{section}/optional/`.
- Optional rule sets are inactive by default and shall only be applied when a prompt explicitly requests them, for example by phrases such as `Apply rules for lean interface iteration in the following steps.` or `Apply LII rules.`
- An optional rule set may be requested by its descriptive name, by its rule set ID, or by another equally clear explicit reference.
- Agents shall never infer or auto-enable optional rule sets from general intent alone.
- If an optional rule or rule set cannot be identified and addressed clearly, agents shall stop and ask before proceeding.
# Prepared Orders
- An `order` is a prepared prompt for one isolated operation rather than a general workflow or standing rule set.
- Orders shall be stored under `prompts/`.
- Order files shall use the naming schema `ORDER-0001-<slug>.md`, `ORDER-0002-<slug>.md`, and so on.
- The canonical order identifier is the `ORDER-0001` style prefix. The trailing slug is descriptive only.
- Recommended internal order file structure is: prompt ID, prompt name, purpose, trigger examples, scope, operation, and expected output.
- Orders shall only be executed when they are explicitly requested by a prompt such as `Execute ORDER-0007.` or `Execute ORDER 7.`
- Agents may accept an unambiguous short numeric reference such as `ORDER 7` as an alias for `ORDER-0007`.
- If an order cannot be identified uniquely and clearly, agents shall stop and ask before proceeding.
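As a sketch, a hypothetical order file using the recommended naming schema and internal structure could look as follows; the order number, slug, and operation are invented for illustration:

```text
prompts/ORDER-0007-prune-dead-helpers.md

# ORDER-0007
Name: Prune dead helpers
Purpose: Remove helper functions that no longer have call sites.
Trigger examples: "Execute ORDER-0007." / "Execute ORDER 7."
Scope: src/ only; do not touch tests/ or requirements/.
Operation: Find helpers without callers, delete them, run the unit suite.
Expected output: A short summary listing each removed helper.
```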
# Toolstack Guides
Location:
```text
guidance/stacks/
```
Examples:
- `guidance/stacks/python.md`
- `guidance/stacks/typescript.md`
- `guidance/stacks/docker.md`
- `guidance/stacks/terraform.md`
Use for:
- Language or framework expectations.
- Tooling and environment conventions.
- Build, test, and runtime guidance tied to a specific stack.
# Coding Conventions
Location:
```text
guidance/conventions/
```
Examples:
- `guidance/conventions/naming.md`
- `guidance/conventions/testing.md`
- `guidance/conventions/review.md`
Use for:
- Naming and structure conventions.
- Testing expectations.
- Code review and quality rules.
# Recurring Workflows
Location:
```text
guidance/workflows/
```
Examples:
- `guidance/workflows/feature-delivery.md`
- `guidance/workflows/bugfix.md`
- `guidance/workflows/release.md`
- `guidance/workflows/incident-response.md`
Use for:
- Repeatable task flows.
- Checklists for common delivery work.
- Operational or maintenance procedures.
<!-- Enable this optional section by removing the outer HTML comment markers from this segment
when you want agents to create, update, and consult released-product
documentation in `docs/`.
# Released Product Documentation
Released-product documentation should live outside the generic sections above.
Recommended location:
```text
docs/
```
Examples:
- `docs/readme.md`
- `docs/installation.md`
- `docs/history.md`
Agent rules for docs output:
- Keep content compact but comprehensive.
- Write for end users, operators, or other consumers of the released product.
- Prefer shipped behavior, supported workflows, and stable terminology over
internal implementation detail.
- Keep documentation synchronized with released behavior.
- Update release history when user-visible changes are shipped.
Recommended topics:
- Product overview and intended use.
- Installation, configuration, and upgrade guidance.
- Usage patterns, operational instructions, and support boundaries.
- Compatibility notes, migration notes, and release history.
- Troubleshooting and common pitfalls when relevant. -->
<!-- Enable this optional section by removing the outer HTML comment markers from this
segment when you want agents to produce and consult workflow output in `process/`.
# Agent Output In `process/`
The `process/` directory is primarily for agent output created during
delivery, maintenance, and review work.
Recommended location:
```text
process/
```
Agent rules for process output:
- Use `process/` for agent-produced artifacts rather than released-product
documentation.
- Keep entries concise, traceable, and tied to resulting changes.
- Treat `process/` as workflow output, not as the primary source of product
truth.
- Prefer summaries and rationale over raw transcript dumps unless a workflow
explicitly requires full prompt history.
# Agent Change Log
Location:
```text
process/log.md
```
Use for:
- Capturing prompts given to agents.
- Recording concise explanations of the resulting changes made by agents.
- Preserving task-by-task rationale, decisions, and implementation notes.
# Coding Handbook
Location:
```text
process/coding-handbook.md
```
Use for:
- A tutorial-style handbook that explains the programming components used in
the project.
- Compact but comprehensive technical onboarding material for future
contributors.
- Written explanations that connect code structure, concepts, and
implementation patterns. -->
# Project-Specific Requirements
Project-specific material should live outside the generic sections above.
Recommended location:
```text
requirements/
```
Examples:
- `requirements/project.md`
- `requirements/architecture.md`
- `requirements/decisions.md`
- `requirements/domain.md`
Use for:
- Product and business requirements.
- Project goals and constraints.
- Architecture and design decisions.
- Domain knowledge that is specific to this repository.
# Agent-Level Variables
When present, `requirements/identifiers.yml` is an optional project-specific
input that defines agent-level variables for use inside `requirements/` and
`guidance/`.
Variable schema:
- Use `@{VARIABLE_NAME}` for agent-level variables.
- Prefer uppercase snake case names such as `@{PROJECT_ID}` or `@{VENDOR_ID}`.
- Do not treat `${...}` as an agent-level variable form; that syntax may appear
in Bash or other code and should not be interpreted as agent metadata.
Scope:
- The effective scope of `requirements/identifiers.yml` is limited to
`requirements/` and `guidance/`.
- Definitions from `requirements/identifiers.yml` must not leak into product code.
Defaults:
- Default `@{VENDOR_ID}` is `osgw`.
- Default `@{PROJECT_ID}` is the current repository directory name.
Resolution rules:
- Treat `requirements/identifiers.yml` as optional; when it is absent, agents
may still resolve the defaults defined above.
- If a variable is used in `requirements/` or `guidance/` and it is not
defined in `requirements/identifiers.yml` and does not have a default in this
file, agents may stop and report the undefined variable.
- Prefer updating duplicated identifier values in `requirements/` and
`guidance/` to use the variable schema when that improves consistency.
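A minimal `requirements/identifiers.yml` could make the two default variables explicit. The flat key/value layout below is an assumption for illustration; this file does not mandate a schema:

```yaml
# Hypothetical requirements/identifiers.yml; the flat key/value layout is an
# assumption, not a requirement from this document.
VENDOR_ID: osgw    # resolved wherever @{VENDOR_ID} appears in requirements/ or guidance/
PROJECT_ID: ffx    # overrides the default repository-directory-name resolution
```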
# Precedence
Some precedence levels may be absent because optional levels can remain inside
HTML comments. The smaller numeric index wins.
Apply guidance in this order:
1. Direct user or task instructions.
2. Project-specific documents in `requirements/`.
<!-- 3. Released-product documentation in `docs/` when shipped behavior or
user-facing expectations are relevant. -->
4. Relevant modular guides in `guidance/stacks/`, `guidance/conventions/`, or `guidance/workflows/`.
<!-- 5. Agent output in `process/` when prior prompts, rationale, or
implementation notes are relevant. -->
6. This `AGENTS.md`.
# Maintenance
- Keep this file short and stable.
- Move detail into dedicated modules when a section becomes too specific or too long.
- Add new guideline files only when they solve a recurring need.
- Remove outdated references when the repository structure changes.
# Current Status
This repository defines the base `AGENTS.md` structure plus project-specific
requirements and modular guidance.
Future project work can add:
- Reusable modules under `guidance/`
- Project-specific documentation under `requirements/`
- Optional temporary iteration context in `SCRATCHPAD.md`
- Optional released-product documentation under `docs/` by uncommenting its segment
- Optional agent output under `process/` by uncommenting its segment
- Cross-references from this file once those documents exist


@@ -99,6 +99,52 @@ TMDB-backed metadata enrichment requires `TMDB_API_KEY` to be set in the environ
## Version History
### 0.4.2
- pattern details now show an inline `Show: <quality>` hint next to the quality field when the pattern itself has no stored quality but the selected show does
- inspect stream tables now show attachment format labels like `TTF` in the codec column and keep attachment language cells blank instead of showing an undefined language
- ffmpeg damaged-MP3 diagnostics now recognize additional corruption lines such as `invalid new backstep`, keeping them grouped under the `warn-corrupt-mpeg-audio` review summary
### 0.4.1
- `convert` now supports `--copy-video` and `--copy-audio` to keep the selected stream type in copy mode without applying the corresponding reencode flags, filters, or formatting options
- ffmpeg conversions now monitor diagnostics while the process is running, retry unset AVI packet timestamps once with `-fflags +genpts`, and stop early when a file should be skipped instead of waiting for the full job to finish
- end-of-run convert summaries now list only ffmpeg findings that still require review, including named remedy identifiers such as `warn-corrupt-mpeg-audio`
- `upgrade` now finishes by reporting the installed FFX version together with the active bundle branch
### 0.3.1
- debug mode screen titles now append the active Textual screen class name, making screen-specific troubleshooting easier during inspect and edit flows
- `--cut` again works as a combined flag/option: omitted disables cutting, bare `--cut` applies the default `60,180`, and explicit duration or `START,DURATION` values stay supported
- H.265 unmux commands no longer force an invalid `-f h265` output format, keeping ffmpeg copy extraction aligned with the required Annex B bitstream filter
- H.264 encoding now falls back from `libx264` to `libopenh264` with a warning when needed, and the test fixtures use the same encoder fallback so the suite remains portable across ffmpeg builds
### 0.3.0
- inspect and edit screens now refresh nested track and pattern changes more reliably, with inspect-mode tables aligned to the target pattern view shown in the differences pane
- metadata editing got a follow-up polish pass with clearer ffmpeg notifications, a shared in-screen log pane, safer apply/reload handling, and expanded cleanup and normalization coverage
- track and asset probing recognize additional codecs, and the modern test suite now covers more metadata-editor, change-set, screen-state, and asset-probe behavior
- Textual now requires version `8.0` or newer to match the UI APIs used by the current screens
### 0.2.6
- DB-free `ffx edit` workflow for in-place metadata editing via temporary-file rewrite
- inspect and edit workflows split into dedicated Textual screens with shared media-workflow support
- Textual tables and row actions now separate raw data from rendered labels to avoid markup leaking into stored metadata
- responsive screen layout pass, `Esc` back handling, sortable show/inspect tables, and improved edit-screen notifications/toggles
- application-wide UTF-8 i18n catalogs with language precedence from CLI over config over system over German default
- metadata normalization extended for localized subtitle titles, ISO language cleanup, and smarter track editor language/title helpers
### 0.2.5
- show-level quality and notes fields
- pattern-over-show-over-default season-shift resolution with dynamic DB migration loading
- migration prompt now reports the upgrade path and creates an in-place DB backup before applying schema changes
- `upgrade --branch <name>` now fetches remote-only branches before switching
- `unmux` now applies season shifting to subtitle output filenames
- convert now keeps DB-defined target subtitle dispositions authoritative over sidecar filename disposition flags when a pattern definition exists
- focused modern tests added around migrations, unmux, upgrade, and subtitle-disposition import precedence
### 0.2.4
- lightweight CLI commands now stay import-light via lazy runtime loading


@@ -1,128 +0,0 @@
# Scratchpad
## Goal
- Capture a compact, project-wide list of optimization candidates after a broad scan of the current FFX codebase, tooling, and requirements.
## Settled
- The biggest near-term wins are in startup cost, repeated subprocess work, repeated database query patterns, and general repo hygiene.
- This list is intentionally optimization-oriented rather than bug-oriented. Some items below also improve correctness or maintainability, but they were selected because they can reduce runtime cost, operator friction, or iteration overhead.
- A first modern integration slice now exists under [`tests/integration/subtrack_mapping`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping). Remaining test-suite cleanup is now mostly about migrating and shrinking the legacy harness surface under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy).
- Shared CLI defaults for container/output tokens now live outside [`src/ffx/ffx_controller.py`](/home/osgw/.local/src/codex/ffx/src/ffx/ffx_controller.py), and a focused unit test locks in the lazy-import contract.
- Helper filename and rich-text utilities now use compiled raw regexes plus translate-based filename filtering, with unit coverage for TMDB suffix rewriting and Rich color stripping.
- Process resource limiting now has explicit disabled/default states in the CLI and requirements, and combined CPU-plus-niceness wrapping now executes as `cpulimit -- nice -n ... <command>` instead of a less explicit prefix chain.
- FFX logger setup now reuses named handlers, and fallback logger access no longer mutates handlers in ordinary constructors and helpers.
- The process wrapper now uses `subprocess.run(...)` with centralized command formatting plus stable timeout and missing-command error mapping.
- Pattern matching now uses cached compiled regexes plus explicit duplicate-match errors, and pattern creation flows no longer persist zero-track patterns.
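The process-wrapper shape described above can be sketched roughly as follows; this is a minimal illustration under assumptions, with the function name and error messages invented rather than taken from the FFX code:

```python
import subprocess


def run_wrapped(cmd: list[str], timeout: float = 60.0) -> str:
    """Hypothetical sketch of a process wrapper built on subprocess.run
    with a stable timeout and missing-command error mapping."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout, check=True
        )
    except FileNotFoundError as exc:
        # Map a missing executable to one stable, formatted error.
        raise RuntimeError(f"command not found: {cmd[0]}") from exc
    except subprocess.TimeoutExpired as exc:
        raise RuntimeError(f"timeout after {timeout}s: {' '.join(cmd)}") from exc
    return result.stdout
```

Centralizing the mapping this way means every caller sees the same error shape regardless of which subprocess failure occurred.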
## Focused Snapshot
- Highest-leverage application optimizations:
- Decide whether placeholder help/settings screens should ship or disappear.
- Trim dead helpers and other dormant surface that still looks active.
- Highest-leverage repo and workflow optimizations:
- Continue migrating the oversized legacy test/combinator surface into focused modern tests so it is easier to run, debug, and extend.
## Optimization Candidates
1. Placeholder UI surfaces should either ship or disappear
- [`src/ffx/help_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/help_screen.py) and [`src/ffx/settings_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/settings_screen.py) are placeholders.
- Optimization:
- Either remove them from the active UI surface or complete them.
- Avoid paying ongoing maintenance cost for unfinished navigation targets.
- Expected value:
- Leaner interface.
- Lower UX ambiguity.
2. Several helper functions are unfinished or dead-weight
- [`src/ffx/helper.py`](/home/osgw/.local/src/codex/ffx/src/ffx/helper.py) contains `permutateList(...): pass`.
- There are many combinator and conversion placeholders across tests and migrations.
- Optimization:
- Remove dead code, finish it, or isolate it behind a clearly dormant area.
- Avoid carrying stubbed utility surface that looks reusable but is not.
- Expected value:
- Smaller mental model.
- Less time spent re-evaluating inactive paths.
3. Test suite shape is expensive to understand and likely expensive to run
- The project still carries a large legacy matrix of combinator files under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy), several placeholder `pass` implementations, and at least one suspicious filename with an embedded space: [`tests/legacy/disposition_combinator_2_3 .py`](/home/osgw/.local/src/codex/ffx/tests/legacy/disposition_combinator_2_3 .py).
- A first focused replacement slice now exists in [`tests/integration/subtrack_mapping/test_cli_bundle.py`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping/test_cli_bundle.py), so the remaining work is migration and consolidation rather than creating the modern test shape from scratch.
- Optimization:
- Continue replacing broad combinator matrices with focused parametrized integration and unit tests.
- Retire the bespoke legacy discovery and runner path once equivalent coverage exists.
- Normalize file naming and test discovery conventions.
- Expected value:
- Faster contributor onboarding.
- Easier CI adoption later.
## Open
- Should optimization work focus first on operator-perceived latency, internal maintainability, or correctness-risk cleanup that also has performance upside?
- Is the long-term supported model still “local Linux workstation plus Textual UI,” or should optimization decisions bias toward a more scriptable/headless CLI?
## Gaps Right Now
- No explicit prioritization owner or milestone for the optimization backlog.
- No benchmark or timing harness exists for startup, probe, DB, or conversion orchestration overhead.
- Repo hygiene is still mixed with generated artifacts and some clearly unfinished files.
- The legacy TMDB-backed `Scenario 4` path is currently blocked by a pattern/track regression: `Patterns must define at least one track before they can be stored.` This surfaced while rerunning TMDB-dependent checks after the zero-track pattern hardening.
## Next
1. Triage the list into quick wins, medium refactors, and long-horizon cleanup.
2. Tackle the cheapest remaining product-surface cleanup first:
- placeholder UI surfaces and dead helper cleanup.
3. Continue replacing oversized legacy test matrices with focused modern integration and unit coverage.
4. Triage the legacy `Scenario 4` pattern/track failure and decide whether to fix the harness, adapt it to the zero-track guard, or retire that path during the ongoing test-suite migration.
## Shifted Season Status (2026-04-12)
- Current assessment:
- The shifted-season subsystem is present end to end and looks feature-complete in shape, but it is not yet hardened.
- The storage, TUI CRUD surface, and CLI/TMDB filename application path all exist, so this is no longer a stubbed or half-started area.
- The main gap is correctness and direct verification rather than missing surface area.
- Implemented surface confirmed:
- Requirements still treat shifted seasons as part of the accepted product surface in [`requirements/project.md`](/home/osgw/.local/src/codex/ffx/requirements/project.md) and [`requirements/architecture.md`](/home/osgw/.local/src/codex/ffx/requirements/architecture.md).
- Persistence exists via [`src/ffx/model/shifted_season.py`](/home/osgw/.local/src/codex/ffx/src/ffx/model/shifted_season.py) plus the `Show.shifted_seasons` relationship in [`src/ffx/model/show.py`](/home/osgw/.local/src/codex/ffx/src/ffx/model/show.py).
- CRUD logic exists in [`src/ffx/shifted_season_controller.py`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py).
- Textual add/edit/delete flows are wired through [`src/ffx/shifted_season_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_details_screen.py), [`src/ffx/shifted_season_delete_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_delete_screen.py), and the show details table in [`src/ffx/show_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/show_details_screen.py).
- CLI conversion applies season shifts before TMDB lookup and output suffix generation in [`src/ffx/cli.py`](/home/osgw/.local/src/codex/ffx/src/ffx/cli.py).
- Verified current behavior:
- `~/.local/share/ffx.venv/bin/python -m unittest discover -s tests/unit -p 'test_*.py'` passed on 2026-04-12: `75` tests in `0.795s`.
- That run emitted `ResourceWarning` messages for unclosed SQLite connections, so the suite is green but not perfectly clean.
- There is almost no direct shifted-season coverage in the modern tests:
- [`tests/unit/test_cli_rename_only.py`](/home/osgw/.local/src/codex/ffx/tests/unit/test_cli_rename_only.py) stubs `ShiftedSeasonController` rather than exercising it.
- [`tests/unit/test_screen_support.py`](/home/osgw/.local/src/codex/ffx/tests/unit/test_screen_support.py) only verifies controller bootstrap wiring.
- Net effect: the subsystem is integrated, but its core rules are effectively untested by the current modern suite.
- Reproduced correctness gaps:
- Overlap validation is broken in [`src/ffx/shifted_season_controller.py:41`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:41) because `getOriginalSeason` is compared as a method object instead of being called.
- Reproduction on 2026-04-12 with a temp SQLite DB:
- Added `S1 E1-E10`.
- `checkShiftedSeason(...)` incorrectly returned `True` for overlapping `S1 E5-E15`.
- `addShiftedSeason(...)` then stored the overlapping row successfully.
- `updateShiftedSeason(...)` in [`src/ffx/shifted_season_controller.py:93`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:93) does not enforce episode ordering, so an invalid range like `first_episode=20`, `last_episode=10` was accepted in the same reproduction.
- Because [`src/ffx/shifted_season_controller.py:213`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:213) returns the first matching sibling and [`src/ffx/shifted_season_controller.py:163`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:163) applies no explicit ordering, overlapping rows would also make runtime shifting ambiguous.
- Progress summary:
- Good progress:
- The subsystem exists across requirements, schema, UI, and conversion flow.
- It appears fully integrated into the show-editing workflow rather than parked as dead code.
- Incomplete progress:
- Validation logic is not trustworthy yet.
- Modern tests do not currently protect the subsystem's real behavior.
- User-facing error feedback in the shifted-season screens still has placeholder `#TODO: Meldung` branches.
- Recommended next slice:
1. Add direct controller tests for overlap rejection, episode-order validation, and `shiftSeason(...)` selection behavior.
2. Fix `checkShiftedSeason(...)` and add the same range/order validation to `updateShiftedSeason(...)`.
3. Make sibling selection deterministic or enforce non-overlap strongly enough that ordering no longer matters in practice.
4. Add at least one focused integration test that proves a stored shifted season changes TMDB lookup and/or generated filename numbering during conversion.
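The `getOriginalSeason` comparison failure noted above boils down to a missing call: comparing the bound method object itself instead of its return value. A minimal standalone illustration, with class and field names invented rather than copied from the FFX model:

```python
class ShiftedSeason:
    """Illustrative stand-in for the real model class (names are invented)."""

    def __init__(self, original_season: int) -> None:
        self._original_season = original_season

    def getOriginalSeason(self) -> int:
        return self._original_season


def same_season_buggy(row: ShiftedSeason, season: int) -> bool:
    # BUG: missing parentheses compares the bound method object against an
    # int, which is never equal, so overlapping rows slip through the check.
    return row.getOriginalSeason == season


def same_season_fixed(row: ShiftedSeason, season: int) -> bool:
    return row.getOriginalSeason() == season
```

With `row = ShiftedSeason(original_season=1)`, the buggy variant reports no match for season `1` while the fixed variant does, which is consistent with the overlap check silently returning the wrong result.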
## Delete When
- Delete this scratchpad once the optimization backlog is either converted into issues/work items or distilled into durable project guidance.


@@ -1,4 +1,5 @@
{
"language": {{ language_json }},
"databasePath": {{ database_path_json }},
"logDirectory": {{ log_directory_json }},
"subtitlesDirectory": {{ subtitles_directory_json }},

assets/i18n/de.json (new file), 361 lines

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "Abchasisch",
"AFAR": "Afar",
"AFRIKAANS": "Afrikaans",
"AKAN": "Akan",
"ALBANIAN": "Albanisch",
"AMHARIC": "Amharisch",
"ARABIC": "Arabisch",
"ARAGONESE": "Aragonesisch",
"ARMENIAN": "Armenisch",
"ASSAMESE": "Assamesisch",
"AVARIC": "Awarisch",
"AVESTAN": "Avestisch",
"AYMARA": "Aymara",
"AZERBAIJANI": "Aserbaidschanisch",
"BAMBARA": "Bambara",
"BASHKIR": "Baschkirisch",
"BASQUE": "Baskisch",
"BELARUSIAN": "Weißrussisch",
"BENGALI": "Bengalisch",
"BISLAMA": "Bislama",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bosnisch",
"BRETON": "Bretonisch",
"BULGARIAN": "Bulgarisch",
"BURMESE": "Burmesisch",
"CATALAN": "Katalanisch",
"CHAMORRO": "Chamorro",
"CHECHEN": "Tschetschenisch",
"CHICHEWA": "Chichewa",
"CHINESE": "Chinesisch",
"CHURCH_SLAVIC": "Kirchenslawisch",
"CHUVASH": "Tschuwaschisch",
"CORNISH": "Kornisch",
"CORSICAN": "Korsisch",
"CREE": "Cree",
"CROATIAN": "Kroatisch",
"CZECH": "Tschechisch",
"DANISH": "Dänisch",
"DIVEHI": "Divehi",
"DUTCH": "Niederländisch",
"DZONGKHA": "Dzongkha",
"ENGLISH": "Englisch",
"ESPERANTO": "Esperanto",
"ESTONIAN": "Estnisch",
"EWE": "Ewe-Sprache",
"FAROESE": "Färöisch",
"FIJIAN": "Fidschianisch",
"FILIPINO": "Filipino",
"FINNISH": "Finnisch",
"FRENCH": "Französisch",
"FULAH": "Ful",
"GALICIAN": "Galizisch",
"GANDA": "Ganda",
"GEORGIAN": "Georgisch",
"GERMAN": "Deutsch",
"GREEK": "Griechisch",
"GUARANI": "Guaraní",
"GUJARATI": "Gujarati",
"HAITIAN": "Haitianisch",
"HAUSA": "Haussa",
"HEBREW": "Hebräisch",
"HERERO": "Herero",
"HINDI": "Hindi",
"HIRI_MOTU": "Hiri-Motu",
"HUNGARIAN": "Ungarisch",
"ICELANDIC": "Isländisch",
"IDO": "Ido",
"IGBO": "Ibo",
"INDONESIAN": "Indonesisch",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "Inuktitut",
"INUPIAQ": "Inupiaq",
"IRISH": "Irisch",
"ITALIAN": "Italienisch",
"JAPANESE": "Japanisch",
"JAVANESE": "Javanisch",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "Kannada",
"KANURI": "Kanuri",
"KASHMIRI": "Kaschmirisch",
"KAZAKH": "Kasachisch",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "Kinyarwanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "Komi",
"KONGO": "Kongo",
"KOREAN": "Koreanisch",
"KUANYAMA": "Kuanyama",
"KURDISH": "Kurdisch",
"LAO": "Laotisch",
"LATIN": "Lateinisch",
"LATVIAN": "Lettisch",
"LIMBURGAN": "Limburgan",
"LINGALA": "Lingala",
"LITHUANIAN": "Litauisch",
"LUBA_KATANGA": "Luba-Katanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Makedonisch",
"MALAGASY": "Malagasi",
"MALAY": "Malaiisch",
"MALAYALAM": "Malayalam",
"MALTESE": "Maltesisch",
"MANX": "Manx",
"MAORI": "Maori",
"MARATHI": "Marathi",
"MARSHALLESE": "Marschallesisch",
"MONGOLIAN": "Mongolisch",
"NAURU": "Nauru",
"NAVAJO": "Navajo",
"NDONGA": "Ndonga",
"NEPALI": "Nepali",
"NORTHERN_SAMI": "Nord-Samisch",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Norwegisch",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "Ojibwa",
"ORIYA": "Oriya",
"OROMO": "Oromo",
"OSSETIAN": "Ossetian",
"PALI": "Pali",
"PANJABI": "Panjabi",
"PERSIAN": "Persisch",
"POLISH": "Polnisch",
"PORTUGUESE": "Portugiesisch",
"PUSHTO": "Pushto",
"QUECHUA": "Quechua",
"ROMANIAN": "Romanian",
"ROMANSH": "Bündnerromanisch",
"RUNDI": "Kirundi",
"RUSSIAN": "Russisch",
"SAMOAN": "Samoanisch",
"SANGO": "Sango",
"SANSKRIT": "Sanskrit",
"SARDINIAN": "Sardisch",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Serbisch",
"SHONA": "Schona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "Sindhi",
"SINHALA": "Sinhala",
"SLOVAK": "Slowakisch",
"SLOVENIAN": "Slowenisch",
"SOMALI": "Somali",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "Sundanesisch",
"SWAHILI": "Suaheli; Swaheli",
"SWATI": "Swazi",
"SWEDISH": "Schwedisch",
"TAGALOG": "Tagalog",
"TAHITIAN": "Tahitisch",
"TAJIK": "Tadschikisch",
"TAMIL": "Tamilisch",
"TATAR": "Tatarisch",
"TELUGU": "Telugu",
"THAI": "Thai",
"TIBETAN": "Tibetisch",
"TIGRINYA": "Tigrinja",
"TONGA": "Tonga",
"TSONGA": "Tsonga",
"TSWANA": "Tswana",
"TURKISH": "Türkisch",
"TURKMEN": "Turkmenisch",
"TWI": "Twi",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ukrainisch",
"UNDEFINED": "undefined",
"URDU": "Urdu",
"UZBEK": "Usbekisch",
"VENDA": "Venda",
"VIETNAMESE": "Vietnamesisch",
"VOLAPUK": "Volapük",
"WALLOON": "Wallonisch",
"WELSH": "Walisisch",
"WESTERN_FRISIAN": "Westfriesisch",
"WOLOF": "Wolof",
"XHOSA": "Xhosa",
"YIDDISH": "Jiddisch",
"YORUBA": "Joruba",
"ZHUANG": "Zhuang",
"ZULU": "Zulu"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<Neue Serie>",
"Add": "Hinzufügen",
"Add Pattern": "Muster hinzufügen",
"Apply": "Anwenden",
"Apply failed: {error}": "Anwenden fehlgeschlagen: {error}",
"Are you sure to delete the following filename pattern?": "Möchtest du das folgende Dateinamensmuster wirklich löschen?",
"Are you sure to delete the following shifted season?": "Möchtest du die folgende verschobene Staffel wirklich löschen?",
"Are you sure to delete the following show?": "Möchtest du die folgende Serie wirklich löschen?",
"Are you sure to delete the following {track_type} track?": "Möchtest du den folgenden {track_type}-Stream wirklich löschen?",
"Are you sure to delete this tag?": "Möchtest du dieses Tag wirklich löschen?",
"Audio Layout": "Audiolayout",
"Back": "Zurück",
"Cancel": "Abbrechen",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "Es kann kein weiterer Stream mit gesetztem Dispositions-Flag 'default' oder 'forced' hinzugefügt werden",
"Changes applied and file reloaded.": "Änderungen angewendet und Datei neu geladen.",
"Cleanup": "Bereinigen",
"Cleanup disabled.": "Bereinigung deaktiviert.",
"Cleanup enabled.": "Bereinigung aktiviert.",
"Codec": "Codec",
"Continuing edit session.": "Bearbeitung wird fortgesetzt.",
"Default": "Standard",
"Delete": "Löschen",
"Delete Show": "Serie löschen",
"Deleted media tag {tag!r}.": "Medien-Tag {tag!r} gelöscht.",
"Differences": "Unterschiede",
"Differences (file->db/output)": "Unterschiede (Datei->DB/Ausgabe)",
"Discard": "Verwerfen",
"Discard pending metadata changes and quit?": "Ausstehende Metadatenänderungen verwerfen und beenden?",
"Discard pending metadata changes and reload the file state?": "Ausstehende Metadatenänderungen verwerfen und Dateistand neu laden?",
"Down": "Runter",
"Dry-run: would rewrite via temporary file {target_path}": "Trockenlauf: würde über temporäre Datei {target_path} neu schreiben",
"Edit": "Bearbeiten",
"Edit Pattern": "Muster bearbeiten",
"Edit Show": "Serie bearbeiten",
"Edit filename pattern": "Dateinamensmuster bearbeiten",
"Edit shifted season": "Verschobene Staffel bearbeiten",
"Edit stream": "Stream bearbeiten",
"Episode Offset": "Episodenoffset",
"Episode offset": "Episodenoffset",
"File": "Datei",
"File patterns": "Datei-Namensmuster",
"First Episode": "Erste Episode",
"First episode": "Erste Episode",
"Forced": "Erzwungen",
"Help": "Hilfe",
"Help Screen": "Hilfe-Bildschirm",
"ID": "ID",
"Identify": "Identifizieren",
"Index": "Index",
"Index / Subindex": "Index / Unterindex",
"Index Episode Digits": "Ep. Index Stellen",
"Index Season Digits": "Sta. Index Stellen",
"Indicator Edisode Digits": "Ep. Indikator Stellen",
"Indicator Season Digits": "Sta. Indikator Stellen",
"Keep Editing": "Weiter bearbeiten",
"Keeping pending changes.": "Ausstehende Änderungen bleiben erhalten.",
"Key": "Schlüssel",
"Language": "Sprache",
"Last Episode": "Letzte Episode",
"Last episode": "Letzte Episode",
"Layout": "Layout",
"Media Tags": "Medien-Tags",
"More than one default audio stream detected and no prompt set": "Mehr als ein Standard-Audiostream erkannt und keine Abfrage aktiviert",
"More than one default audio stream detected! Please select stream": "Mehr als ein Standard-Audiostream erkannt! Bitte Stream auswählen",
"More than one default subtitle stream detected and no prompt set": "Mehr als ein Standard-Untertitelstream erkannt und keine Abfrage aktiviert",
"More than one default subtitle stream detected! Please select stream": "Mehr als ein Standard-Untertitelstream erkannt! Bitte Stream auswählen",
"More than one default video stream detected and no prompt set": "Mehr als ein Standard-Videostream erkannt und keine Abfrage aktiviert",
"More than one default video stream detected! Please select stream": "Mehr als ein Standard-Videostream erkannt! Bitte Stream auswählen",
"More than one forced audio stream detected and no prompt set": "Mehr als ein erzwungener Audiostream erkannt und keine Abfrage aktiviert",
"More than one forced audio stream detected! Please select stream": "Mehr als ein erzwungener Audiostream erkannt! Bitte Stream auswählen",
"More than one forced subtitle stream detected and no prompt set": "Mehr als ein erzwungener Untertitelstream erkannt und keine Abfrage aktiviert",
"More than one forced subtitle stream detected! Please select stream": "Mehr als ein erzwungener Untertitelstream erkannt! Bitte Stream auswählen",
"More than one forced video stream detected and no prompt set": "Mehr als ein erzwungener Videostream erkannt und keine Abfrage aktiviert",
"More than one forced video stream detected! Please select stream": "Mehr als ein erzwungener Videostream erkannt! Bitte Stream auswählen",
"Name": "Name",
"New Pattern": "Neues Muster",
"New Show": "Neue Serie",
"New filename pattern": "Neues Dateinamensmuster",
"New shifted season": "Neue verschobene Staffel",
"New stream": "Neuer Stream",
"No": "Nein",
"No changes to apply.": "Keine Änderungen zum Anwenden.",
"No changes to revert.": "Keine Änderungen zum Zurücksetzen.",
"Normalization disabled.": "Normalisierung deaktiviert.",
"Normalization enabled.": "Normalisierung aktiviert.",
"Normalize": "Normalisieren",
"Notes": "Notizen",
"Pattern": "Muster",
"Planned Changes (file->edited output)": "Geplante Änderungen (Datei->bearbeitete Ausgabe)",
"Quality": "Qualität",
"Quit": "Beenden",
"Remove Pattern": "Muster entfernen",
"Revert": "Zurücksetzen",
"Reverted pending changes.": "Ausstehende Änderungen verworfen.",
"Save": "Speichern",
"Season Offset": "Staffeloffset",
"Select a stream first.": "Bitte zuerst einen Stream auswählen.",
"Set Default": "Als Standard setzen",
"Set Forced": "Als erzwungen setzen",
"Settings Screen": "Einstellungsbildschirm",
"Numbering Mapping": "Abbildung Nummerierung",
"Show": "Serie",
"Shows": "Serien",
"Source Season": "Quellstaffel",
"SrcIndex": "QuellIndex",
"Status": "Status",
"Stay": "Bleiben",
"Stream dispositions": "Stream-Dispositionen",
"Stream tags": "Stream-Tags",
"Streams": "Streams",
"SubIndex": "Unterindex",
"Substitute": "Ersetzen",
"Substitute pattern": "Muster ersetzen",
"Title": "Titel",
"Type": "Typ",
"Unable to update selected stream.": "Ausgewählten Stream konnte nicht aktualisiert werden.",
"Up": "Hoch",
"Update Pattern": "Muster aktualisieren",
"Updated media tag {tag!r}.": "Medien-Tag {tag!r} aktualisiert.",
"Updated stream #{index} ({track_type}).": "Stream #{index} ({track_type}) aktualisiert.",
"Value": "Wert",
"Year": "Jahr",
"Yes": "Ja",
"add media tag: key='{key}' value='{value}'": "Medien-Tag hinzufügen: Schlüssel='{key}' Wert='{value}'",
"add {track_type} track: index={index} lang={language}": "{track_type}-Stream hinzufügen: Index={index} Sprache={language}",
"attached_pic": "attached_pic",
"attachment": "Anhang",
"audio": "Audio",
"captions": "Untertitel",
"change media tag: key='{key}' value='{value}'": "Medien-Tag ändern: Schlüssel='{key}' Wert='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "Stream #{index} ({track_type}:{sub_index}) Disposition hinzufügen={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "Stream #{index} ({track_type}:{sub_index}) Schlüssel hinzufügen={key} Wert={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "Stream #{index} ({track_type}:{sub_index}) Schlüssel ändern={key} Wert={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "Stream #{index} ({track_type}:{sub_index}) Disposition entfernen={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "Stream #{index} ({track_type}:{sub_index}) Schlüssel entfernen={key} Wert={value}",
"clean_effects": "Nur Effekte",
"comment": "Kommentar",
"default": "Standard",
"dependent": "abhängig",
"descriptions": "Beschreibungen",
"dub": "Synchronisiert",
"for pattern": "für Muster",
"forced": "erzwungen",
"from": "von",
"from pattern": "aus Muster",
"from show": "aus Serie",
"hearing_impaired": "hörgeschädigt",
"karaoke": "Karaoke",
"lyrics": "Liedtext",
"metadata": "Metadaten",
"non_diegetic": "nicht-diegetisch",
"original": "Original",
"pattern #{id}": "Muster #{id}",
"remove media tag: key='{key}' value='{value}'": "Medien-Tag entfernen: Schlüssel='{key}' Wert='{value}'",
"remove stream #{index}": "Stream #{index} entfernen",
"show #{id}": "Serie #{id}",
"stereo": "Stereo",
"still_image": "Standbild",
"sub index": "Unterindex",
"subtitle": "Untertitel",
"timed_thumbnails": "zeitgesteuerte Vorschaubilder",
"undefined": "undefiniert",
"unknown": "unbekannt",
"video": "Video",
"visual_impaired": "sehgeschädigt"
}
}
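The phrase keys above double as the English source strings, and placeholders such as `{tag!r}` use Python's `str.format` conversion syntax. A minimal sketch of how such a catalog might be consumed; the loader function `tr` and its fallback-to-key behavior are illustrative assumptions, not the project's actual API:

```python
import json

# Tiny inline stand-in for assets/i18n/de.json (structure copied from
# the catalog above; loading the real file from disk is assumed to
# behave the same way).
CATALOG = json.loads("""
{
  "iso_languages": {"GERMAN": "Deutsch"},
  "phrases": {
    "Deleted media tag {tag!r}.": "Medien-Tag {tag!r} gelöscht.",
    "Show": "Serie"
  }
}
""")

def tr(phrase: str, **values) -> str:
    # Look up the translated template; fall back to the English key
    # itself so missing entries degrade gracefully instead of raising.
    template = CATALOG["phrases"].get(phrase, phrase)
    return template.format(**values) if values else template

print(tr("Deleted media tag {tag!r}.", tag="title"))  # Medien-Tag 'title' gelöscht.
print(tr("Show"))                                     # Serie
```

Because `!r` applies `repr()`, the substituted tag name arrives quoted, matching how the messages read in the catalogs.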

assets/i18n/en.json Normal file

@@ -0,0 +1,360 @@
{
"iso_languages": {
"ABKHAZIAN": "Abkhazian",
"AFAR": "Afar",
"AFRIKAANS": "Afrikaans",
"AKAN": "Akan",
"ALBANIAN": "Albanian",
"AMHARIC": "Amharic",
"ARABIC": "Arabic",
"ARAGONESE": "Aragonese",
"ARMENIAN": "Armenian",
"ASSAMESE": "Assamese",
"AVARIC": "Avaric",
"AVESTAN": "Avestan",
"AYMARA": "Aymara",
"AZERBAIJANI": "Azerbaijani",
"BAMBARA": "Bambara",
"BASHKIR": "Bashkir",
"BASQUE": "Basque",
"BELARUSIAN": "Belarusian",
"BENGALI": "Bengali",
"BISLAMA": "Bislama",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bosnian",
"BRETON": "Breton",
"BULGARIAN": "Bulgarian",
"BURMESE": "Burmese",
"CATALAN": "Catalan",
"CHAMORRO": "Chamorro",
"CHECHEN": "Chechen",
"CHICHEWA": "Chichewa",
"CHINESE": "Chinese",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "Chuvash",
"CORNISH": "Cornish",
"CORSICAN": "Corsican",
"CREE": "Cree",
"CROATIAN": "Croatian",
"CZECH": "Czech",
"DANISH": "Danish",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "Dzongkha",
"ENGLISH": "English",
"ESPERANTO": "Esperanto",
"ESTONIAN": "Estonian",
"EWE": "Ewe",
"FAROESE": "Faroese",
"FIJIAN": "Fijian",
"FILIPINO": "Filipino",
"FINNISH": "Finnish",
"FRENCH": "French",
"FULAH": "Fulah",
"GALICIAN": "Galician",
"GANDA": "Ganda",
"GEORGIAN": "Georgian",
"GERMAN": "German",
"GREEK": "Greek",
"GUARANI": "Guarani",
"GUJARATI": "Gujarati",
"HAITIAN": "Haitian",
"HAUSA": "Hausa",
"HEBREW": "Hebrew",
"HERERO": "Herero",
"HINDI": "Hindi",
"HIRI_MOTU": "Hiri Motu",
"HUNGARIAN": "Hungarian",
"ICELANDIC": "Icelandic",
"IDO": "Ido",
"IGBO": "Igbo",
"INDONESIAN": "Indonesian",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "Inuktitut",
"INUPIAQ": "Inupiaq",
"IRISH": "Irish",
"ITALIAN": "Italian",
"JAPANESE": "Japanese",
"JAVANESE": "Javanese",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "Kannada",
"KANURI": "Kanuri",
"KASHMIRI": "Kashmiri",
"KAZAKH": "Kazakh",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "Kinyarwanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "Komi",
"KONGO": "Kongo",
"KOREAN": "Korean",
"KUANYAMA": "Kuanyama",
"KURDISH": "Kurdish",
"LAO": "Lao",
"LATIN": "Latin",
"LATVIAN": "Latvian",
"LIMBURGAN": "Limburgan",
"LINGALA": "Lingala",
"LITHUANIAN": "Lithuanian",
"LUBA_KATANGA": "Luba-Katanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Macedonian",
"MALAGASY": "Malagasy",
"MALAY": "Malay",
"MALAYALAM": "Malayalam",
"MALTESE": "Maltese",
"MANX": "Manx",
"MAORI": "Maori",
"MARATHI": "Marathi",
"MARSHALLESE": "Marshallese",
"MONGOLIAN": "Mongolian",
"NAURU": "Nauru",
"NAVAJO": "Navajo",
"NDONGA": "Ndonga",
"NEPALI": "Nepali",
"NORTHERN_SAMI": "Northern Sami",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Norwegian",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "Ojibwa",
"ORIYA": "Oriya",
"OROMO": "Oromo",
"OSSETIAN": "Ossetian",
"PALI": "Pali",
"PANJABI": "Panjabi",
"PERSIAN": "Persian",
"POLISH": "Polish",
"PORTUGUESE": "Portuguese",
"PUSHTO": "Pushto",
"QUECHUA": "Quechua",
"ROMANIAN": "Romanian",
"ROMANSH": "Romansh",
"RUNDI": "Rundi",
"RUSSIAN": "Russian",
"SAMOAN": "Samoan",
"SANGO": "Sango",
"SANSKRIT": "Sanskrit",
"SARDINIAN": "Sardinian",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Serbian",
"SHONA": "Shona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "Sindhi",
"SINHALA": "Sinhala",
"SLOVAK": "Slovak",
"SLOVENIAN": "Slovenian",
"SOMALI": "Somali",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "Sundanese",
"SWAHILI": "Swahili",
"SWATI": "Swati",
"SWEDISH": "Swedish",
"TAGALOG": "Tagalog",
"TAHITIAN": "Tahitian",
"TAJIK": "Tajik",
"TAMIL": "Tamil",
"TATAR": "Tatar",
"TELUGU": "Telugu",
"THAI": "Thai",
"TIBETAN": "Tibetan",
"TIGRINYA": "Tigrinya",
"TONGA": "Tonga",
"TSONGA": "Tsonga",
"TSWANA": "Tswana",
"TURKISH": "Turkish",
"TURKMEN": "Turkmen",
"TWI": "Twi",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ukrainian",
"UNDEFINED": "undefined",
"URDU": "Urdu",
"UZBEK": "Uzbek",
"VENDA": "Venda",
"VIETNAMESE": "Vietnamese",
"VOLAPUK": "Volapük",
"WALLOON": "Walloon",
"WELSH": "Welsh",
"WESTERN_FRISIAN": "Western Frisian",
"WOLOF": "Wolof",
"XHOSA": "Xhosa",
"YIDDISH": "Yiddish",
"YORUBA": "Yoruba",
"ZHUANG": "Zhuang",
"ZULU": "Zulu"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<New show>",
"Add": "Add",
"Add Pattern": "Add Pattern",
"Apply": "Apply",
"Apply failed: {error}": "Apply failed: {error}",
"Are you sure to delete the following filename pattern?": "Are you sure to delete the following filename pattern?",
"Are you sure to delete the following shifted season?": "Are you sure to delete the following shifted season?",
"Are you sure to delete the following show?": "Are you sure to delete the following show?",
"Are you sure to delete the following {track_type} track?": "Are you sure to delete the following {track_type} track?",
"Are you sure to delete this tag?": "Are you sure to delete this tag?",
"Audio Layout": "Audio Layout",
"Back": "Back",
"Cancel": "Cancel",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "Cannot add another stream with disposition flag 'default' or 'forced' set",
"Changes applied and file reloaded.": "Changes applied and file reloaded.",
"Cleanup": "Cleanup",
"Cleanup disabled.": "Cleanup disabled.",
"Cleanup enabled.": "Cleanup enabled.",
"Codec": "Codec",
"Continuing edit session.": "Continuing edit session.",
"Default": "Default",
"Delete": "Delete",
"Delete Show": "Delete Show",
"Deleted media tag {tag!r}.": "Deleted media tag {tag!r}.",
"Differences": "Differences",
"Differences (file->db/output)": "Differences (file->db/output)",
"Discard": "Discard",
"Discard pending metadata changes and quit?": "Discard pending metadata changes and quit?",
"Discard pending metadata changes and reload the file state?": "Discard pending metadata changes and reload the file state?",
"Down": "Down",
"Dry-run: would rewrite via temporary file {target_path}": "Dry-run: would rewrite via temporary file {target_path}",
"Edit": "Edit",
"Edit Pattern": "Edit Pattern",
"Edit Show": "Edit Show",
"Edit filename pattern": "Edit filename pattern",
"Edit shifted season": "Edit shifted season",
"Edit stream": "Edit stream",
"Episode Offset": "Episode Offset",
"Episode offset": "Episode offset",
"File": "File",
"File patterns": "File patterns",
"First Episode": "First Episode",
"First episode": "First episode",
"Forced": "Forced",
"Help": "Help",
"Help Screen": "Help Screen",
"ID": "ID",
"Identify": "Identify",
"Index": "Index",
"Index / Subindex": "Index / Subindex",
"Index Episode Digits": "Index Episode Digits",
"Index Season Digits": "Index Season Digits",
"Indicator Edisode Digits": "Indicator Edisode Digits",
"Indicator Season Digits": "Indicator Season Digits",
"Keep Editing": "Keep Editing",
"Keeping pending changes.": "Keeping pending changes.",
"Key": "Key",
"Language": "Language",
"Last Episode": "Last Episode",
"Last episode": "Last episode",
"Layout": "Layout",
"Media Tags": "Media Tags",
"More than one default audio stream detected and no prompt set": "More than one default audio stream detected and no prompt set",
"More than one default audio stream detected! Please select stream": "More than one default audio stream detected! Please select stream",
"More than one default subtitle stream detected and no prompt set": "More than one default subtitle stream detected and no prompt set",
"More than one default subtitle stream detected! Please select stream": "More than one default subtitle stream detected! Please select stream",
"More than one default video stream detected and no prompt set": "More than one default video stream detected and no prompt set",
"More than one default video stream detected! Please select stream": "More than one default video stream detected! Please select stream",
"More than one forced audio stream detected and no prompt set": "More than one forced audio stream detected and no prompt set",
"More than one forced audio stream detected! Please select stream": "More than one forced audio stream detected! Please select stream",
"More than one forced subtitle stream detected and no prompt set": "More than one forced subtitle stream detected and no prompt set",
"More than one forced subtitle stream detected! Please select stream": "More than one forced subtitle stream detected! Please select stream",
"More than one forced video stream detected and no prompt set": "More than one forced video stream detected and no prompt set",
"More than one forced video stream detected! Please select stream": "More than one forced video stream detected! Please select stream",
"Name": "Name",
"New Pattern": "New Pattern",
"New Show": "New Show",
"New filename pattern": "New filename pattern",
"New shifted season": "New shifted season",
"New stream": "New stream",
"No": "No",
"No changes to apply.": "No changes to apply.",
"No changes to revert.": "No changes to revert.",
"Normalization disabled.": "Normalization disabled.",
"Normalization enabled.": "Normalization enabled.",
"Normalize": "Normalize",
"Notes": "Notes",
"Pattern": "Pattern",
"Planned Changes (file->edited output)": "Planned Changes (file->edited output)",
"Quality": "Quality",
"Quit": "Quit",
"Remove Pattern": "Remove Pattern",
"Revert": "Revert",
"Reverted pending changes.": "Reverted pending changes.",
"Save": "Save",
"Season Offset": "Season Offset",
"Select a stream first.": "Select a stream first.",
"Set Default": "Set Default",
"Set Forced": "Set Forced",
"Settings Screen": "Settings Screen",
"Numbering Mapping": "Numbering Mapping",
"Show": "Show",
"Shows": "Shows",
"SrcIndex": "SrcIndex",
"Status": "Status",
"Stay": "Stay",
"Stream dispositions": "Stream dispositions",
"Stream tags": "Stream tags",
"Streams": "Streams",
"SubIndex": "SubIndex",
"Substitute": "Substitute",
"Substitute pattern": "Substitute pattern",
"Title": "Title",
"Type": "Type",
"Unable to update selected stream.": "Unable to update selected stream.",
"Up": "Up",
"Update Pattern": "Update Pattern",
"Updated media tag {tag!r}.": "Updated media tag {tag!r}.",
"Updated stream #{index} ({track_type}).": "Updated stream #{index} ({track_type}).",
"Value": "Value",
"Year": "Year",
"Yes": "Yes",
"add media tag: key='{key}' value='{value}'": "add media tag: key='{key}' value='{value}'",
"add {track_type} track: index={index} lang={language}": "add {track_type} track: index={index} lang={language}",
"attached_pic": "attached_pic",
"attachment": "attachment",
"audio": "audio",
"captions": "captions",
"change media tag: key='{key}' value='{value}'": "change media tag: key='{key}' value='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}",
"clean_effects": "clean_effects",
"comment": "comment",
"default": "default",
"dependent": "dependent",
"descriptions": "descriptions",
"dub": "dub",
"for pattern": "for pattern",
"forced": "forced",
"from": "from",
"from pattern": "from pattern",
"from show": "from show",
"hearing_impaired": "hearing_impaired",
"karaoke": "karaoke",
"lyrics": "lyrics",
"metadata": "metadata",
"non_diegetic": "non_diegetic",
"original": "original",
"pattern #{id}": "pattern #{id}",
"remove media tag: key='{key}' value='{value}'": "remove media tag: key='{key}' value='{value}'",
"remove stream #{index}": "remove stream #{index}",
"show #{id}": "show #{id}",
"stereo": "stereo",
"still_image": "still_image",
"sub index": "sub index",
"subtitle": "subtitle",
"timed_thumbnails": "timed_thumbnails",
"undefined": "undefined",
"unknown": "unknown",
"video": "video",
"visual_impaired": "visual_impaired"
}
}
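Since en.json maps every key to itself, it can serve as a reference for flagging entries in the other catalogs that were left untranslated (e.g. "Catalan" or "Dutch" in de.json). A hedged sketch of such a check; note the heuristic is noisy, because short phrases like "Codec" or "Status" are legitimately identical across languages, so hits still need manual review:

```python
def find_untranslated(en: dict, other: dict) -> list[str]:
    """Heuristic check: flag keys whose translation is byte-identical
    to the English value. False positives (loanwords, shared terms)
    are expected; the output is a review list, not a verdict."""
    hits = []
    for section in ("iso_languages", "phrases"):
        en_section = en.get(section, {})
        for key, value in other.get(section, {}).items():
            if en_section.get(key) == value:
                hits.append(f"{section}.{key}")
    return sorted(hits)

# Hypothetical mini-catalogs for illustration:
en = {"iso_languages": {"DUTCH": "Dutch", "GERMAN": "German"}}
de = {"iso_languages": {"DUTCH": "Dutch", "GERMAN": "Deutsch"}}
print(find_untranslated(en, de))  # ['iso_languages.DUTCH']
```

Running this against the real files would require `json.load` on the two catalog paths; the dicts above stand in for that step.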

assets/i18n/eo.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "Abĥaza",
"AFAR": "Afara",
"AFRIKAANS": "Afrikansa",
"AKAN": "Akana",
"ALBANIAN": "Albana",
"AMHARIC": "Amhara",
"ARABIC": "Araba",
"ARAGONESE": "Aragona",
"ARMENIAN": "Armena",
"ASSAMESE": "Asama",
"AVARIC": "Avara",
"AVESTAN": "Avesta",
"AYMARA": "Ajmara",
"AZERBAIJANI": "Azerbajĝana",
"BAMBARA": "Bambara",
"BASHKIR": "Baŝkira",
"BASQUE": "Eŭska",
"BELARUSIAN": "Belorusa",
"BENGALI": "Bengala",
"BISLAMA": "Bislamo",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bosna",
"BRETON": "Bretona",
"BULGARIAN": "Bulgara",
"BURMESE": "Birma",
"CATALAN": "Catalan",
"CHAMORRO": "Ĉamora",
"CHECHEN": "Ĉeĉena",
"CHICHEWA": "Chichewa",
"CHINESE": "Ĉina",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "Ĉuvaŝa",
"CORNISH": "Kornvala",
"CORSICAN": "Korsika",
"CREE": "Kria",
"CROATIAN": "Kroata",
"CZECH": "Ĉeĥa",
"DANISH": "Dana",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "Dzonka",
"ENGLISH": "Angla",
"ESPERANTO": "Esperanto",
"ESTONIAN": "Estona",
"EWE": "Evea",
"FAROESE": "Feroa",
"FIJIAN": "Fiĝia",
"FILIPINO": "Filipino",
"FINNISH": "Finna",
"FRENCH": "Franca",
"FULAH": "Fula",
"GALICIAN": "Galega",
"GANDA": "Ganda",
"GEORGIAN": "Kartvela",
"GERMAN": "Germana",
"GREEK": "Greek",
"GUARANI": "Gvarania",
"GUJARATI": "Guĝarata",
"HAITIAN": "Haitian",
"HAUSA": "Haŭsa",
"HEBREW": "Hebrea",
"HERERO": "Herera",
"HINDI": "Hindia",
"HIRI_MOTU": "Hirimotua",
"HUNGARIAN": "Hungara",
"ICELANDIC": "Islanda",
"IDO": "Ido",
"IGBO": "Igba",
"INDONESIAN": "Indonezia",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "Inuktituta",
"INUPIAQ": "Inupiaka",
"IRISH": "Irlanda",
"ITALIAN": "Itala",
"JAPANESE": "Japana",
"JAVANESE": "Java",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "Kanara",
"KANURI": "Kanura",
"KASHMIRI": "Kaŝmira",
"KAZAKH": "Kazaĥa",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "Ruanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "Komia",
"KONGO": "Konga",
"KOREAN": "Korea",
"KUANYAMA": "Kuanyama",
"KURDISH": "Kurda",
"LAO": "Laosa",
"LATIN": "Latina",
"LATVIAN": "Latva",
"LIMBURGAN": "Limburgan",
"LINGALA": "Lingala",
"LITHUANIAN": "Litova",
"LUBA_KATANGA": "Luba-katanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Makedona",
"MALAGASY": "Malagasa",
"MALAY": "Malaja",
"MALAYALAM": "Malajala",
"MALTESE": "Malta",
"MANX": "Manksa",
"MAORI": "Maoria",
"MARATHI": "Marata",
"MARSHALLESE": "Marŝala",
"MONGOLIAN": "Mongola",
"NAURU": "Naura",
"NAVAJO": "Navajo",
"NDONGA": "Ndonga",
"NEPALI": "Nepala",
"NORTHERN_SAMI": "Norda samea",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Norvega",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "Oĝibva",
"ORIYA": "Orija",
"OROMO": "Oroma",
"OSSETIAN": "Ossetian",
"PALI": "Palia",
"PANJABI": "Panjabi",
"PERSIAN": "Persa",
"POLISH": "Pola",
"PORTUGUESE": "Portugala",
"PUSHTO": "Pushto",
"QUECHUA": "Keĉua",
"ROMANIAN": "Romanian",
"ROMANSH": "Romanĉa",
"RUNDI": "Burunda",
"RUSSIAN": "Rusa",
"SAMOAN": "Samoa",
"SANGO": "Sangoa",
"SANSKRIT": "Sanskrito",
"SARDINIAN": "Sarda",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Serba",
"SHONA": "Ŝona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "Sinda",
"SINHALA": "Sinhala",
"SLOVAK": "Slovaka",
"SLOVENIAN": "Slovena",
"SOMALI": "Somalia",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "Sunda",
"SWAHILI": "Svahila",
"SWATI": "Svazia",
"SWEDISH": "Sveda",
"TAGALOG": "Tagaloga",
"TAHITIAN": "Tahitia",
"TAJIK": "Taĝika",
"TAMIL": "Tamila",
"TATAR": "Tatara",
"TELUGU": "Telugua",
"THAI": "Taja",
"TIBETAN": "Tibeta",
"TIGRINYA": "Tigraja",
"TONGA": "Tonga",
"TSONGA": "Conga",
"TSWANA": "Cvana",
"TURKISH": "Turka",
"TURKMEN": "Turkmena",
"TWI": "Tvia",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ukraina",
"UNDEFINED": "undefined",
"URDU": "Urdua",
"UZBEK": "Uzbeka",
"VENDA": "Vendaa",
"VIETNAMESE": "Vjetnama",
"VOLAPUK": "Volapuko",
"WALLOON": "Valona",
"WELSH": "Kimra",
"WESTERN_FRISIAN": "Okcidenta frisa",
"WOLOF": "Volofa",
"XHOSA": "Kosa",
"YIDDISH": "Jida",
"YORUBA": "Joruba",
"ZHUANG": "Zhuang",
"ZULU": "Zulua"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<Nova serio>",
"Add": "Aldoni",
"Add Pattern": "Aldoni ŝablonon",
"Apply": "Apliki",
"Apply failed: {error}": "Apliko malsukcesis: {error}",
"Are you sure to delete the following filename pattern?": "Ĉu vi certe volas forigi la jenan dosiernoman ŝablonon?",
"Are you sure to delete the following shifted season?": "Ĉu vi certe volas forigi la jenan ŝovitan sezonon?",
"Are you sure to delete the following show?": "Ĉu vi certe volas forigi la jenan serion?",
"Are you sure to delete the following {track_type} track?": "Ĉu vi certe volas forigi la jenan {track_type}-trakon?",
"Are you sure to delete this tag?": "Ĉu vi certe volas forigi ĉi tiun etikedon?",
"Audio Layout": "Aŭda aranĝo",
"Back": "Reen",
"Cancel": "Nuligi",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "Ne eblas aldoni alian fluon kun la dispozicia flago 'default' aŭ 'forced' aktiva",
"Changes applied and file reloaded.": "Ŝanĝoj aplikitaj kaj dosiero reŝargita.",
"Cleanup": "Purigado",
"Cleanup disabled.": "Purigado malŝaltita.",
"Cleanup enabled.": "Purigado ŝaltita.",
"Codec": "Kodeko",
"Continuing edit session.": "Daŭrigante la redaktan seancon.",
"Default": "Defaŭlta",
"Delete": "Forigi",
"Delete Show": "Forigi serion",
"Deleted media tag {tag!r}.": "Forigis la aŭdvidan etikedon {tag!r}.",
"Differences": "Diferencoj",
"Differences (file->db/output)": "Diferencoj (dosiero->DB/eligo)",
"Discard": "Forĵeti",
"Discard pending metadata changes and quit?": "Ĉu forĵeti atendatajn metadatumajn ŝanĝojn kaj eliri?",
"Discard pending metadata changes and reload the file state?": "Ĉu forĵeti atendatajn metadatumajn ŝanĝojn kaj reŝargi la dosieran staton?",
"Down": "Malsupren",
"Dry-run: would rewrite via temporary file {target_path}": "Seka provo: reskribus per provizora dosiero {target_path}",
"Edit": "Redakti",
"Edit Pattern": "Redakti ŝablonon",
"Edit Show": "Redakti serion",
"Edit filename pattern": "Redakti dosiernoman ŝablonon",
"Edit shifted season": "Redakti ŝovitan sezonon",
"Edit stream": "Redakti fluon",
"Episode Offset": "Epizoda deŝovo",
"Episode offset": "Epizoda deŝovo",
"File": "Dosiero",
"File patterns": "Dosieraj ŝablonoj",
"First Episode": "Unua epizodo",
"First episode": "Unua epizodo",
"Forced": "Devigita",
"Help": "Helpo",
"Help Screen": "Helpa ekrano",
"ID": "ID",
"Identify": "Identigi",
"Index": "Indekso",
"Index / Subindex": "Indekso / Subindekso",
"Index Episode Digits": "Ciferoj de epizoda indekso",
"Index Season Digits": "Ciferoj de sezona indekso",
"Indicator Edisode Digits": "Ciferoj de epizoda indikilo",
"Indicator Season Digits": "Ciferoj de sezona indikilo",
"Keep Editing": "Daŭrigi redaktadon",
"Keeping pending changes.": "Konservas atendatajn ŝanĝojn.",
"Key": "Ŝlosilo",
"Language": "Lingvo",
"Last Episode": "Lasta epizodo",
"Last episode": "Lasta epizodo",
"Layout": "Aranĝo",
"Media Tags": "Aŭdvidaj etikedoj",
"More than one default audio stream detected and no prompt set": "Pli ol unu defaŭlta sonfluo detektita kaj neniu instigo agordita",
"More than one default audio stream detected! Please select stream": "Pli ol unu defaŭlta sonfluo detektita! Bonvolu elekti fluon",
"More than one default subtitle stream detected and no prompt set": "Pli ol unu defaŭlta subtitola fluo detektita kaj neniu instigo agordita",
"More than one default subtitle stream detected! Please select stream": "Pli ol unu defaŭlta subtitola fluo detektita! Bonvolu elekti fluon",
"More than one default video stream detected and no prompt set": "Pli ol unu defaŭlta videofluo detektita kaj neniu instigo agordita",
"More than one default video stream detected! Please select stream": "Pli ol unu defaŭlta videofluo detektita! Bonvolu elekti fluon",
"More than one forced audio stream detected and no prompt set": "Pli ol unu devigita sonfluo detektita kaj neniu instigo agordita",
"More than one forced audio stream detected! Please select stream": "Pli ol unu devigita sonfluo detektita! Bonvolu elekti fluon",
"More than one forced subtitle stream detected and no prompt set": "Pli ol unu devigita subtitola fluo detektita kaj neniu instigo agordita",
"More than one forced subtitle stream detected! Please select stream": "Pli ol unu devigita subtitola fluo detektita! Bonvolu elekti fluon",
"More than one forced video stream detected and no prompt set": "Pli ol unu devigita videofluo detektita kaj neniu instigo agordita",
"More than one forced video stream detected! Please select stream": "Pli ol unu devigita videofluo detektita! Bonvolu elekti fluon",
"Name": "Nomo",
"New Pattern": "Nova ŝablono",
"New Show": "Nova serio",
"New filename pattern": "Nova dosiernoma ŝablono",
"New shifted season": "Nova ŝovita sezono",
"New stream": "Nova fluo",
"No": "Ne",
"No changes to apply.": "Neniuj ŝanĝoj por apliki.",
"No changes to revert.": "Neniuj ŝanĝoj por malfari.",
"Normalization disabled.": "Normaligo malŝaltita.",
"Normalization enabled.": "Normaligo ŝaltita.",
"Normalize": "Normaligi",
"Notes": "Notoj",
"Pattern": "Ŝablono",
"Planned Changes (file->edited output)": "Planitaj ŝanĝoj (dosiero->redaktita eligo)",
"Quality": "Kvalito",
"Quit": "Eliri",
"Remove Pattern": "Forigi ŝablonon",
"Revert": "Malfari",
"Reverted pending changes.": "Malfaris atendatajn ŝanĝojn.",
"Save": "Konservi",
"Season Offset": "Sezona deŝovo",
"Select a stream first.": "Bonvolu unue elekti fluon.",
"Set Default": "Agordi kiel defaŭltan",
"Set Forced": "Agordi kiel devigitan",
"Settings Screen": "Agorda ekrano",
"Numbering Mapping": "Ŝovitaj sezonoj",
"Show": "Serio",
"Shows": "Serioj",
"Source Season": "Fonta sezono",
"SrcIndex": "Fontindekso",
"Status": "Stato",
"Stay": "Resti",
"Stream dispositions": "Fluaj dispozicioj",
"Stream tags": "Fluaj etikedoj",
"Streams": "Fluoj",
"SubIndex": "Subindekso",
"Substitute": "Anstataŭigi",
"Substitute pattern": "Anstataŭigi ŝablonon",
"Title": "Titolo",
"Type": "Tipo",
"Unable to update selected stream.": "Ne eblis ĝisdatigi la elektitan fluon.",
"Up": "Supren",
"Update Pattern": "Ĝisdatigi ŝablonon",
"Updated media tag {tag!r}.": "Ĝisdatigis la aŭdvidan etikedon {tag!r}.",
"Updated stream #{index} ({track_type}).": "Ĝisdatigis fluon #{index} ({track_type}).",
"Value": "Valoro",
"Year": "Jaro",
"Yes": "Jes",
"add media tag: key='{key}' value='{value}'": "aldoni aŭdvidan etikedon: ŝlosilo='{key}' valoro='{value}'",
"add {track_type} track: index={index} lang={language}": "aldoni {track_type}-trakon: indekso={index} lingvo={language}",
"attached_pic": "attached_pic",
"attachment": "aldonaĵo",
"audio": "sono",
"captions": "subtekstoj",
"change media tag: key='{key}' value='{value}'": "ŝanĝi aŭdvidan etikedon: ŝlosilo='{key}' valoro='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "ŝanĝi fluon #{index} ({track_type}:{sub_index}) aldoni dispozicion={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "ŝanĝi fluon #{index} ({track_type}:{sub_index}) aldoni ŝlosilon={key} valoron={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "ŝanĝi fluon #{index} ({track_type}:{sub_index}) ŝanĝi ŝlosilon={key} valoron={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "ŝanĝi fluon #{index} ({track_type}:{sub_index}) forigi dispozicion={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "ŝanĝi fluon #{index} ({track_type}:{sub_index}) forigi ŝlosilon={key} valoron={value}",
"clean_effects": "nur efektoj",
"comment": "komento",
"default": "defaŭlta",
"dependent": "dependa",
"descriptions": "priskriboj",
"dub": "dublado",
"for pattern": "por ŝablono",
"forced": "devigita",
"from": "de",
"from pattern": "de ŝablono",
"from show": "el serio",
"hearing_impaired": "aŭdmalhelpita",
"karaoke": "karaokeo",
"lyrics": "kantoteksto",
"metadata": "metadatenoj",
"non_diegetic": "nediĝeta",
"original": "originala",
"pattern #{id}": "ŝablono #{id}",
"remove media tag: key='{key}' value='{value}'": "forigi aŭdvidan etikedon: ŝlosilo='{key}' valoro='{value}'",
"remove stream #{index}": "forigi fluon #{index}",
"show #{id}": "serio #{id}",
"stereo": "stereo",
"still_image": "senmova bildo",
"sub index": "subindekso",
"subtitle": "subtitolo",
"timed_thumbnails": "tempigitaj bildetoj",
"undefined": "nedifinita",
"unknown": "nekonata",
"video": "video",
"visual_impaired": "vidmalhelpita"
}
}

assets/i18n/es.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "Abjaziano",
"AFAR": "Afar",
"AFRIKAANS": "Afrikaans",
"AKAN": "Akan",
"ALBANIAN": "Albanés",
"AMHARIC": "Ámárico",
"ARABIC": "Árábe",
"ARAGONESE": "Aragonés",
"ARMENIAN": "Armenio",
"ASSAMESE": "Assamais",
"AVARIC": "Avaric",
"AVESTAN": "Avestan",
"AYMARA": "Aymará",
"AZERBAIJANI": "Azerbayano",
"BAMBARA": "Bambara",
"BASHKIR": "Bashkir",
"BASQUE": "Vasco",
"BELARUSIAN": "Bieloruso",
"BENGALI": "Bengalí",
"BISLAMA": "Bislama",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bosnio",
"BRETON": "Bretón",
"BULGARIAN": "Búlgaro",
"BURMESE": "Birmano",
"CATALAN": "Catalan",
"CHAMORRO": "Chamorro",
"CHECHEN": "Checheno",
"CHICHEWA": "Chichewa",
"CHINESE": "Chino",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "Chuvash",
"CORNISH": "Córnico",
"CORSICAN": "Corso",
"CREE": "Cree",
"CROATIAN": "Croata",
"CZECH": "Checo",
"DANISH": "Danés",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "Butaní",
"ENGLISH": "Inglés",
"ESPERANTO": "Esperanto",
"ESTONIAN": "Estonio",
"EWE": "Ewe",
"FAROESE": "Feroés",
"FIJIAN": "Fidji",
"FILIPINO": "Filipino",
"FINNISH": "Finés",
"FRENCH": "Francés",
"FULAH": "Fulah",
"GALICIAN": "Gallego",
"GANDA": "Ganda",
"GEORGIAN": "Georgiano",
"GERMAN": "Alemán",
"GREEK": "Greek",
"GUARANI": "Guaraní",
"GUJARATI": "guyaratí",
"HAITIAN": "Haitian",
"HAUSA": "Haussa",
"HEBREW": "Hebreo",
"HERERO": "Herero",
"HINDI": "Hindi",
"HIRI_MOTU": "Hiri Motu",
"HUNGARIAN": "Húngaro",
"ICELANDIC": "Islandés",
"IDO": "Ido",
"IGBO": "Igbo",
"INDONESIAN": "Indonesio",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "Inuktitut",
"INUPIAQ": "Inupiak",
"IRISH": "Irlandés",
"ITALIAN": "Italiano",
"JAPANESE": "Japonés",
"JAVANESE": "Javanés",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "Canarés",
"KANURI": "Kanuri",
"KASHMIRI": "Kashmir",
"KAZAKH": "Kazako",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "Kinyarwanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "Komi",
"KONGO": "Kongo",
"KOREAN": "Coreano",
"KUANYAMA": "Kuanyama",
"KURDISH": "Kurdo",
"LAO": "laosiano",
"LATIN": "Latín",
"LATVIAN": "Letón",
"LIMBURGAN": "Limburgan",
"LINGALA": "Lingala",
"LITHUANIAN": "Lituano",
"LUBA_KATANGA": "Luba-Katanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Macedonio",
"MALAGASY": "Malgache",
"MALAY": "Malayo",
"MALAYALAM": "malabar",
"MALTESE": "Maltés",
"MANX": "Manx [Gaélico de Manx]",
"MAORI": "Maorí",
"MARATHI": "Marath",
"MARSHALLESE": "Marshall",
"MONGOLIAN": "Mongol",
"NAURU": "Nauru",
"NAVAJO": "Navajo",
"NDONGA": "Ndonga",
"NEPALI": "Nepalés",
"NORTHERN_SAMI": "Sami del Norte",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Noruego",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "Ojibwa",
"ORIYA": "Oriya",
"OROMO": "Oromo (Afan)",
"OSSETIAN": "Ossetian",
"PALI": "Pali",
"PANJABI": "Panjabi",
"PERSIAN": "Persa",
"POLISH": "Polaco",
"PORTUGUESE": "Portugués",
"PUSHTO": "Pushto",
"QUECHUA": "Quechua",
"ROMANIAN": "Romanian",
"ROMANSH": "Romaní",
"RUNDI": "Kiroundi",
"RUSSIAN": "Ruso",
"SAMOAN": "Samoano",
"SANGO": "Sango",
"SANSKRIT": "Sánscrito",
"SARDINIAN": "Sardo",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Serbio",
"SHONA": "Shona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "Sindhi",
"SINHALA": "Sinhala",
"SLOVAK": "Eslovaco",
"SLOVENIAN": "Esloveno",
"SOMALI": "Somalí",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "Sondanés",
"SWAHILI": "Swahili",
"SWATI": "Siswati",
"SWEDISH": "Sueco",
"TAGALOG": "Tagalo",
"TAHITIAN": "Tahitiano",
"TAJIK": "Tajiko",
"TAMIL": "Tamil",
"TATAR": "Tataro",
"TELUGU": "Telugu",
"THAI": "Tailandés",
"TIBETAN": "Tibetano",
"TIGRINYA": "Tigrinya",
"TONGA": "Tonga",
"TSONGA": "Tsonga",
"TSWANA": "Setchwana",
"TURKISH": "Turco",
"TURKMEN": "Turkmeno",
"TWI": "Tchi",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ukranio",
"UNDEFINED": "undefined",
"URDU": "Urdu",
"UZBEK": "Uzbeko",
"VENDA": "Venda",
"VIETNAMESE": "Vietnamita",
"VOLAPUK": "Volapük",
"WALLOON": "valón",
"WELSH": "Galés",
"WESTERN_FRISIAN": "Frisón occidental",
"WOLOF": "Wolof",
"XHOSA": "Xhosa",
"YIDDISH": "Yidish",
"YORUBA": "Yoruba",
"ZHUANG": "Zhuang",
"ZULU": "Zulu"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<Nueva serie>",
"Add": "Añadir",
"Add Pattern": "Añadir patrón",
"Apply": "Aplicar",
"Apply failed: {error}": "Error al aplicar: {error}",
"Are you sure to delete the following filename pattern?": "¿Seguro que quieres eliminar el siguiente patrón de nombre de archivo?",
"Are you sure to delete the following shifted season?": "¿Seguro que quieres eliminar la siguiente temporada desplazada?",
"Are you sure to delete the following show?": "¿Seguro que quieres eliminar la siguiente serie?",
"Are you sure to delete the following {track_type} track?": "¿Seguro que quieres eliminar la pista {track_type} siguiente?",
"Are you sure to delete this tag?": "¿Seguro que quieres eliminar esta etiqueta?",
"Audio Layout": "Disposición de audio",
"Back": "Volver",
"Cancel": "Cancelar",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "No se puede añadir otro flujo con la marca de disposición 'default' o 'forced' activada",
"Changes applied and file reloaded.": "Cambios aplicados y archivo recargado.",
"Cleanup": "Limpieza",
"Cleanup disabled.": "Limpieza desactivada.",
"Cleanup enabled.": "Limpieza activada.",
"Codec": "Códec",
"Continuing edit session.": "Continuando la sesión de edición.",
"Default": "Predeterminado",
"Delete": "Eliminar",
"Delete Show": "Eliminar serie",
"Deleted media tag {tag!r}.": "Etiqueta de medios {tag!r} eliminada.",
"Differences": "Diferencias",
"Differences (file->db/output)": "Diferencias (archivo->BD/salida)",
"Discard": "Descartar",
"Discard pending metadata changes and quit?": "¿Descartar los cambios pendientes de metadatos y salir?",
"Discard pending metadata changes and reload the file state?": "¿Descartar los cambios pendientes de metadatos y recargar el estado del archivo?",
"Down": "Abajo",
"Dry-run: would rewrite via temporary file {target_path}": "Simulación: reescribiría mediante el archivo temporal {target_path}",
"Edit": "Editar",
"Edit Pattern": "Editar patrón",
"Edit Show": "Editar serie",
"Edit filename pattern": "Editar patrón de nombre de archivo",
"Edit shifted season": "Editar temporada desplazada",
"Edit stream": "Editar flujo",
"Episode Offset": "Desplazamiento de episodio",
"Episode offset": "Desplazamiento de episodio",
"File": "Archivo",
"File patterns": "Patrones de archivo",
"First Episode": "Primer episodio",
"First episode": "Primer episodio",
"Forced": "Forzado",
"Help": "Ayuda",
"Help Screen": "Pantalla de ayuda",
"ID": "ID",
"Identify": "Identificar",
"Index": "Índice",
"Index / Subindex": "Índice / Subíndice",
"Index Episode Digits": "Dígitos del índice de episodio",
"Index Season Digits": "Dígitos del índice de temporada",
"Indicator Edisode Digits": "Dígitos del indicador de episodio",
"Indicator Season Digits": "Dígitos del indicador de temporada",
"Keep Editing": "Seguir editando",
"Keeping pending changes.": "Se conservan los cambios pendientes.",
"Key": "Clave",
"Language": "Idioma",
"Last Episode": "Último episodio",
"Last episode": "Último episodio",
"Layout": "Diseño",
"Media Tags": "Etiquetas de medios",
"More than one default audio stream detected and no prompt set": "Se detectó más de un flujo de audio predeterminado y no hay aviso configurado",
"More than one default audio stream detected! Please select stream": "Se detectó más de un flujo de audio predeterminado. Selecciona el flujo",
"More than one default subtitle stream detected and no prompt set": "Se detectó más de un flujo de subtítulos predeterminado y no hay aviso configurado",
"More than one default subtitle stream detected! Please select stream": "Se detectó más de un flujo de subtítulos predeterminado. Selecciona el flujo",
"More than one default video stream detected and no prompt set": "Se detectó más de un flujo de vídeo predeterminado y no hay aviso configurado",
"More than one default video stream detected! Please select stream": "Se detectó más de un flujo de vídeo predeterminado. Selecciona el flujo",
"More than one forced audio stream detected and no prompt set": "Se detectó más de un flujo de audio forzado y no hay aviso configurado",
"More than one forced audio stream detected! Please select stream": "Se detectó más de un flujo de audio forzado. Selecciona el flujo",
"More than one forced subtitle stream detected and no prompt set": "Se detectó más de un flujo de subtítulos forzados y no hay aviso configurado",
"More than one forced subtitle stream detected! Please select stream": "Se detectó más de un flujo de subtítulos forzados. Selecciona el flujo",
"More than one forced video stream detected and no prompt set": "Se detectó más de un flujo de vídeo forzado y no hay aviso configurado",
"More than one forced video stream detected! Please select stream": "Se detectó más de un flujo de vídeo forzado. Selecciona el flujo",
"Name": "Nombre",
"New Pattern": "Nuevo patrón",
"New Show": "Nueva serie",
"New filename pattern": "Nuevo patrón de nombre de archivo",
"New shifted season": "Nueva temporada desplazada",
"New stream": "Nuevo flujo",
"No": "No",
"No changes to apply.": "No hay cambios para aplicar.",
"No changes to revert.": "No hay cambios para revertir.",
"Normalization disabled.": "Normalización desactivada.",
"Normalization enabled.": "Normalización activada.",
"Normalize": "Normalizar",
"Notes": "Notas",
"Pattern": "Patrón",
"Planned Changes (file->edited output)": "Cambios planificados (archivo->salida editada)",
"Quality": "Calidad",
"Quit": "Salir",
"Remove Pattern": "Eliminar patrón",
"Revert": "Revertir",
"Reverted pending changes.": "Se revirtieron los cambios pendientes.",
"Save": "Guardar",
"Season Offset": "Desplazamiento de temporada",
"Select a stream first.": "Selecciona primero un flujo.",
"Set Default": "Establecer como predeterminado",
"Set Forced": "Establecer como forzado",
"Settings Screen": "Pantalla de ajustes",
"Numbering Mapping": "Temporadas desplazadas",
"Show": "Serie",
"Shows": "Series",
"Source Season": "Temporada de origen",
"SrcIndex": "Índice origen",
"Status": "Estado",
"Stay": "Permanecer",
"Stream dispositions": "Disposiciones del flujo",
"Stream tags": "Etiquetas del flujo",
"Streams": "Flujos",
"SubIndex": "Subíndice",
"Substitute": "Sustituir",
"Substitute pattern": "Sustituir patrón",
"Title": "Título",
"Type": "Tipo",
"Unable to update selected stream.": "No se pudo actualizar el flujo seleccionado.",
"Up": "Arriba",
"Update Pattern": "Actualizar patrón",
"Updated media tag {tag!r}.": "Etiqueta de medios {tag!r} actualizada.",
"Updated stream #{index} ({track_type}).": "Flujo #{index} ({track_type}) actualizado.",
"Value": "Valor",
"Year": "Año",
"Yes": "Sí",
"add media tag: key='{key}' value='{value}'": "añadir etiqueta de medios: clave='{key}' valor='{value}'",
"add {track_type} track: index={index} lang={language}": "añadir pista {track_type}: índice={index} idioma={language}",
"attached_pic": "attached_pic",
"attachment": "adjunto",
"audio": "audio",
"captions": "subtítulos",
"change media tag: key='{key}' value='{value}'": "cambiar etiqueta de medios: clave='{key}' valor='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "cambiar flujo #{index} ({track_type}:{sub_index}) añadir disposición={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "cambiar flujo #{index} ({track_type}:{sub_index}) añadir clave={key} valor={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "cambiar flujo #{index} ({track_type}:{sub_index}) cambiar clave={key} valor={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "cambiar flujo #{index} ({track_type}:{sub_index}) quitar disposición={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "cambiar flujo #{index} ({track_type}:{sub_index}) quitar clave={key} valor={value}",
"clean_effects": "solo efectos",
"comment": "comentario",
"default": "predeterminado",
"dependent": "dependiente",
"descriptions": "descripciones",
"dub": "doblaje",
"for pattern": "para el patrón",
"forced": "forzado",
"from": "de",
"from pattern": "del patrón",
"from show": "de la serie",
"hearing_impaired": "personas con discapacidad auditiva",
"karaoke": "karaoke",
"lyrics": "letra",
"metadata": "metadatos",
"non_diegetic": "no diegético",
"original": "original",
"pattern #{id}": "patrón #{id}",
"remove media tag: key='{key}' value='{value}'": "eliminar etiqueta de medios: clave='{key}' valor='{value}'",
"remove stream #{index}": "eliminar flujo #{index}",
"show #{id}": "serie #{id}",
"stereo": "estéreo",
"still_image": "imagen fija",
"sub index": "subíndice",
"subtitle": "subtítulo",
"timed_thumbnails": "miniaturas temporizadas",
"undefined": "indefinido",
"unknown": "desconocido",
"video": "vídeo",
"visual_impaired": "personas con discapacidad visual"
}
}
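The locale files in this diff pair an `iso_languages` table with a `phrases` table whose keys are the English source strings and whose values carry `str.format`-style placeholders such as `{index}`, `{track_type}`, and `{tag!r}`. A minimal sketch of how such a catalog might be loaded and queried; the helper names are hypothetical, not taken from ffx:

```python
import json


def load_catalog(path):
    # Hypothetical loader for a catalog like assets/i18n/es.json.
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)


def translate(catalog, phrase, **params):
    # Look up the English source phrase in the "phrases" section,
    # falling back to the key itself when no translation exists.
    template = catalog.get("phrases", {}).get(phrase, phrase)
    # Substitute str.format-style placeholders only when arguments
    # are supplied, so placeholder-free phrases pass through untouched.
    return template.format(**params) if params else template


catalog = {
    "phrases": {
        "Updated stream #{index} ({track_type}).":
            "Flujo #{index} ({track_type}) actualizado.",
    }
}
print(translate(catalog, "Updated stream #{index} ({track_type}).",
                index=2, track_type="audio"))
# → Flujo #2 (audio) actualizado.
```

Keeping the English string as the fallback is what makes partially translated catalogs (like the entries above that still show English values) degrade gracefully instead of raising a lookup error.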

assets/i18n/fr.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "Abkhaze",
"AFAR": "Afar",
"AFRIKAANS": "Afrikaans",
"AKAN": "Akan",
"ALBANIAN": "Albanais",
"AMHARIC": "Amharique",
"ARABIC": "Arabe",
"ARAGONESE": "Aragonais",
"ARMENIAN": "Arménien",
"ASSAMESE": "Assamais",
"AVARIC": "Avar",
"AVESTAN": "Avestique",
"AYMARA": "Aymara",
"AZERBAIJANI": "Azéri",
"BAMBARA": "Bambara",
"BASHKIR": "Bachkir",
"BASQUE": "Basque",
"BELARUSIAN": "Biélorusse",
"BENGALI": "Bengali",
"BISLAMA": "Bichelamar",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bosniaque",
"BRETON": "Breton",
"BULGARIAN": "Bulgare",
"BURMESE": "Birman",
"CATALAN": "Catalan",
"CHAMORRO": "Chamorro",
"CHECHEN": "Tchétchène",
"CHICHEWA": "Chichewa",
"CHINESE": "Chinois",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "Tchouvache",
"CORNISH": "Cornique",
"CORSICAN": "Corse",
"CREE": "Cri",
"CROATIAN": "Croate",
"CZECH": "Tchèque",
"DANISH": "Danois",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "Dzongkha",
"ENGLISH": "Anglais",
"ESPERANTO": "Espéranto",
"ESTONIAN": "Estonien",
"EWE": "Éwé",
"FAROESE": "Féroïen",
"FIJIAN": "Fidjien",
"FILIPINO": "Filipino",
"FINNISH": "Finnois",
"FRENCH": "Français",
"FULAH": "Peul",
"GALICIAN": "Galicien",
"GANDA": "Ganda",
"GEORGIAN": "Géorgien",
"GERMAN": "Allemand",
"GREEK": "Greek",
"GUARANI": "Guarani",
"GUJARATI": "Goudjarâtî (Gujrâtî)",
"HAITIAN": "Haitian",
"HAUSA": "Haoussa",
"HEBREW": "Hébreu",
"HERERO": "Herero",
"HINDI": "Hindi",
"HIRI_MOTU": "Hiri Motu",
"HUNGARIAN": "Hongrois",
"ICELANDIC": "Islandais",
"IDO": "Ido",
"IGBO": "Igbo",
"INDONESIAN": "Indonésien",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "Inuktitut",
"INUPIAQ": "Inupiaq",
"IRISH": "Irlandais",
"ITALIAN": "Italien",
"JAPANESE": "Japonais",
"JAVANESE": "Javanais",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "Kannara (Canara)",
"KANURI": "Kanouri",
"KASHMIRI": "Kashmiri",
"KAZAKH": "Kazakh",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "Kinyarwanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "Komi",
"KONGO": "Kongo",
"KOREAN": "Coréen",
"KUANYAMA": "Kuanyama",
"KURDISH": "Kurde",
"LAO": "Laotien",
"LATIN": "Latin",
"LATVIAN": "Letton",
"LIMBURGAN": "Limburgan",
"LINGALA": "Lingala",
"LITHUANIAN": "Lituanien",
"LUBA_KATANGA": "Luba-katanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Macédonien",
"MALAGASY": "Malgache",
"MALAY": "Malais",
"MALAYALAM": "Malayalam",
"MALTESE": "Maltais",
"MANX": "Mannois",
"MAORI": "Maori",
"MARATHI": "Marathe",
"MARSHALLESE": "Marshallais",
"MONGOLIAN": "Mongol",
"NAURU": "Nauru",
"NAVAJO": "Navajo",
"NDONGA": "Ndonga",
"NEPALI": "Népalais",
"NORTHERN_SAMI": "Same du Nord",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Norvégien",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "Ojibwa",
"ORIYA": "Oriya",
"OROMO": "Oromo",
"OSSETIAN": "Ossetian",
"PALI": "Pali",
"PANJABI": "Panjabi",
"PERSIAN": "Persan",
"POLISH": "Polonais",
"PORTUGUESE": "Portugais",
"PUSHTO": "Pushto",
"QUECHUA": "Quechua",
"ROMANIAN": "Romanian",
"ROMANSH": "Romanche",
"RUNDI": "Rundi",
"RUSSIAN": "Russe",
"SAMOAN": "Samoan",
"SANGO": "Sango",
"SANSKRIT": "Sanskrit",
"SARDINIAN": "Sarde",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Serbe",
"SHONA": "Shona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "Sindhi",
"SINHALA": "Sinhala",
"SLOVAK": "Slovaque",
"SLOVENIAN": "Slovène",
"SOMALI": "Somali",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "Sundanais",
"SWAHILI": "Swahili",
"SWATI": "Swati",
"SWEDISH": "Suédois",
"TAGALOG": "Tagalog",
"TAHITIAN": "Tahitien",
"TAJIK": "Tadjik",
"TAMIL": "Tamoul",
"TATAR": "Tatar",
"TELUGU": "Télougou",
"THAI": "Thaï",
"TIBETAN": "Tibétain",
"TIGRINYA": "Tigrigna",
"TONGA": "Tonga",
"TSONGA": "Tsonga",
"TSWANA": "Tswana",
"TURKISH": "Turc",
"TURKMEN": "Turkmène",
"TWI": "Twi",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ukrainien",
"UNDEFINED": "undefined",
"URDU": "Ourdou",
"UZBEK": "Ouszbek",
"VENDA": "Venda",
"VIETNAMESE": "Vietnamien",
"VOLAPUK": "Volapük",
"WALLOON": "Wallon",
"WELSH": "Gallois",
"WESTERN_FRISIAN": "Frison occidental",
"WOLOF": "Wolof",
"XHOSA": "Xhosa",
"YIDDISH": "Yiddish",
"YORUBA": "Yoruba",
"ZHUANG": "Zhuang",
"ZULU": "Zoulou"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<Nouvelle série>",
"Add": "Ajouter",
"Add Pattern": "Ajouter un modèle",
"Apply": "Appliquer",
"Apply failed: {error}": "Échec de l'application : {error}",
"Are you sure to delete the following filename pattern?": "Voulez-vous vraiment supprimer le modèle de nom de fichier suivant ?",
"Are you sure to delete the following shifted season?": "Voulez-vous vraiment supprimer la saison décalée suivante ?",
"Are you sure to delete the following show?": "Voulez-vous vraiment supprimer la série suivante ?",
"Are you sure to delete the following {track_type} track?": "Voulez-vous vraiment supprimer la piste {track_type} suivante ?",
"Are you sure to delete this tag?": "Voulez-vous vraiment supprimer cette balise ?",
"Audio Layout": "Disposition audio",
"Back": "Retour",
"Cancel": "Annuler",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "Impossible d'ajouter un autre flux avec l'indicateur de disposition 'default' ou 'forced'",
"Changes applied and file reloaded.": "Modifications appliquées et fichier rechargé.",
"Cleanup": "Nettoyage",
"Cleanup disabled.": "Nettoyage désactivé.",
"Cleanup enabled.": "Nettoyage activé.",
"Codec": "Codec",
"Continuing edit session.": "Poursuite de la session d'édition.",
"Default": "Par défaut",
"Delete": "Supprimer",
"Delete Show": "Supprimer la série",
"Deleted media tag {tag!r}.": "Balise média {tag!r} supprimée.",
"Differences": "Différences",
"Differences (file->db/output)": "Différences (fichier->BD/sortie)",
"Discard": "Ignorer",
"Discard pending metadata changes and quit?": "Ignorer les modifications de métadonnées en attente et quitter ?",
"Discard pending metadata changes and reload the file state?": "Ignorer les modifications de métadonnées en attente et recharger l'état du fichier ?",
"Down": "Descendre",
"Dry-run: would rewrite via temporary file {target_path}": "Simulation : réécrirait via le fichier temporaire {target_path}",
"Edit": "Modifier",
"Edit Pattern": "Modifier le modèle",
"Edit Show": "Modifier la série",
"Edit filename pattern": "Modifier le modèle de nom de fichier",
"Edit shifted season": "Modifier la saison décalée",
"Edit stream": "Modifier le flux",
"Episode Offset": "Décalage d'épisode",
"Episode offset": "Décalage d'épisode",
"File": "Fichier",
"File patterns": "Modèles de fichiers",
"First Episode": "Premier épisode",
"First episode": "Premier épisode",
"Forced": "Forcé",
"Help": "Aide",
"Help Screen": "Écran d'aide",
"ID": "ID",
"Identify": "Identifier",
"Index": "Index",
"Index / Subindex": "Index / Sous-index",
"Index Episode Digits": "Chiffres d'épisode d'index",
"Index Season Digits": "Chiffres de saison d'index",
"Indicator Edisode Digits": "Chiffres d'épisode de l'indicateur",
"Indicator Season Digits": "Chiffres de saison de l'indicateur",
"Keep Editing": "Continuer l'édition",
"Keeping pending changes.": "Les modifications en attente sont conservées.",
"Key": "Clé",
"Language": "Langue",
"Last Episode": "Dernier épisode",
"Last episode": "Dernier épisode",
"Layout": "Disposition",
"Media Tags": "Balises média",
"More than one default audio stream detected and no prompt set": "Plus d'un flux audio par défaut détecté et aucune invite définie",
"More than one default audio stream detected! Please select stream": "Plus d'un flux audio par défaut détecté ! Veuillez sélectionner un flux",
"More than one default subtitle stream detected and no prompt set": "Plus d'un flux de sous-titres par défaut détecté et aucune invite définie",
"More than one default subtitle stream detected! Please select stream": "Plus d'un flux de sous-titres par défaut détecté ! Veuillez sélectionner un flux",
"More than one default video stream detected and no prompt set": "Plus d'un flux vidéo par défaut détecté et aucune invite définie",
"More than one default video stream detected! Please select stream": "Plus d'un flux vidéo par défaut détecté ! Veuillez sélectionner un flux",
"More than one forced audio stream detected and no prompt set": "Plus d'un flux audio forcé détecté et aucune invite définie",
"More than one forced audio stream detected! Please select stream": "Plus d'un flux audio forcé détecté ! Veuillez sélectionner un flux",
"More than one forced subtitle stream detected and no prompt set": "Plus d'un flux de sous-titres forcé détecté et aucune invite définie",
"More than one forced subtitle stream detected! Please select stream": "Plus d'un flux de sous-titres forcé détecté ! Veuillez sélectionner un flux",
"More than one forced video stream detected and no prompt set": "Plus d'un flux vidéo forcé détecté et aucune invite définie",
"More than one forced video stream detected! Please select stream": "Plus d'un flux vidéo forcé détecté ! Veuillez sélectionner un flux",
"Name": "Nom",
"New Pattern": "Nouveau modèle",
"New Show": "Nouvelle série",
"New filename pattern": "Nouveau modèle de nom de fichier",
"New shifted season": "Nouvelle saison décalée",
"New stream": "Nouveau flux",
"No": "Non",
"No changes to apply.": "Aucune modification à appliquer.",
"No changes to revert.": "Aucune modification à annuler.",
"Normalization disabled.": "Normalisation désactivée.",
"Normalization enabled.": "Normalisation activée.",
"Normalize": "Normaliser",
"Notes": "Notes",
"Pattern": "Modèle",
"Planned Changes (file->edited output)": "Modifications prévues (fichier->sortie modifiée)",
"Quality": "Qualité",
"Quit": "Quitter",
"Remove Pattern": "Supprimer le modèle",
"Revert": "Annuler les modifications",
"Reverted pending changes.": "Modifications en attente annulées.",
"Save": "Enregistrer",
"Season Offset": "Décalage de saison",
"Select a stream first.": "Veuillez d'abord sélectionner un flux.",
"Set Default": "Définir par défaut",
"Set Forced": "Définir comme forcé",
"Settings Screen": "Écran des paramètres",
"Numbering Mapping": "Saisons décalées",
"Show": "Série",
"Shows": "Séries",
"Source Season": "Saison source",
"SrcIndex": "Index source",
"Status": "Statut",
"Stay": "Rester",
"Stream dispositions": "Dispositions des flux",
"Stream tags": "Balises du flux",
"Streams": "Flux",
"SubIndex": "Sous-index",
"Substitute": "Remplacer",
"Substitute pattern": "Remplacer le modèle",
"Title": "Titre",
"Type": "Type",
"Unable to update selected stream.": "Impossible de mettre à jour le flux sélectionné.",
"Up": "Monter",
"Update Pattern": "Mettre à jour le modèle",
"Updated media tag {tag!r}.": "Balise média {tag!r} mise à jour.",
"Updated stream #{index} ({track_type}).": "Flux #{index} ({track_type}) mis à jour.",
"Value": "Valeur",
"Year": "Année",
"Yes": "Oui",
"add media tag: key='{key}' value='{value}'": "ajouter une balise média : clé='{key}' valeur='{value}'",
"add {track_type} track: index={index} lang={language}": "ajouter une piste {track_type} : index={index} langue={language}",
"attached_pic": "attached_pic",
"attachment": "pièce jointe",
"audio": "audio",
"captions": "sous-titres",
"change media tag: key='{key}' value='{value}'": "modifier une balise média : clé='{key}' valeur='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "modifier le flux #{index} ({track_type}:{sub_index}) ajouter disposition={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "modifier le flux #{index} ({track_type}:{sub_index}) ajouter clé={key} valeur={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "modifier le flux #{index} ({track_type}:{sub_index}) changer clé={key} valeur={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "modifier le flux #{index} ({track_type}:{sub_index}) supprimer disposition={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "modifier le flux #{index} ({track_type}:{sub_index}) supprimer clé={key} valeur={value}",
"clean_effects": "effets seuls",
"comment": "commentaire",
"default": "par défaut",
"dependent": "dépendant",
"descriptions": "descriptions",
"dub": "doublage",
"for pattern": "pour le modèle",
"forced": "forcé",
"from": "de",
"from pattern": "depuis le modèle",
"from show": "depuis la série",
"hearing_impaired": "malentendants",
"karaoke": "karaoké",
"lyrics": "paroles",
"metadata": "métadonnées",
"non_diegetic": "non diégétique",
"original": "original",
"pattern #{id}": "modèle #{id}",
"remove media tag: key='{key}' value='{value}'": "supprimer une balise média : clé='{key}' valeur='{value}'",
"remove stream #{index}": "supprimer le flux #{index}",
"show #{id}": "série #{id}",
"stereo": "stéréo",
"still_image": "image fixe",
"sub index": "sous-index",
"subtitle": "sous-titre",
"timed_thumbnails": "miniatures horodatées",
"undefined": "indéfini",
"unknown": "inconnu",
"video": "vidéo",
"visual_impaired": "malvoyants"
}
}

assets/i18n/ja.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "アブハジア語",
"AFAR": "アファル語",
"AFRIKAANS": "アフリカーンス語",
"AKAN": "アカン語",
"ALBANIAN": "アルバニア語",
"AMHARIC": "アムハラ語",
"ARABIC": "アラビア語",
"ARAGONESE": "アラゴン語",
"ARMENIAN": "アルメニア語",
"ASSAMESE": "アッサム語",
"AVARIC": "アヴァル語",
"AVESTAN": "アヴェスタ語",
"AYMARA": "アイマラ語",
"AZERBAIJANI": "アゼルバイジャン語",
"BAMBARA": "バンバラ語",
"BASHKIR": "バシキール語",
"BASQUE": "バスク語",
"BELARUSIAN": "白ロシア語",
"BENGALI": "ベンガル語",
"BISLAMA": "ビスラマ語",
"BOKMAL": "Bokmål",
"BOSNIAN": "ボスニア語",
"BRETON": "ブルトン語",
"BULGARIAN": "ブルガリア語",
"BURMESE": "ビルマ語",
"CATALAN": "Catalan",
"CHAMORRO": "チャモロ語",
"CHECHEN": "チェチェン語",
"CHICHEWA": "Chichewa",
"CHINESE": "中国語",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "チュヴァシュ語",
"CORNISH": "コーンウォール語",
"CORSICAN": "コルシカ語",
"CREE": "クリー語",
"CROATIAN": "クロアチア語",
"CZECH": "チェコ語",
"DANISH": "デンマーク語",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "ゾンカ語",
"ENGLISH": "英語",
"ESPERANTO": "エスペラント語",
"ESTONIAN": "エストニア語",
"EWE": "エウェ語",
"FAROESE": "フェロー語",
"FIJIAN": "フィジー語",
"FILIPINO": "Filipino",
"FINNISH": "フィン語",
"FRENCH": "フランス語",
"FULAH": "フラ語",
"GALICIAN": "ガリシア語",
"GANDA": "ガンダ語",
"GEORGIAN": "グルジア語",
"GERMAN": "ドイツ語",
"GREEK": "Greek",
"GUARANI": "グアラニー",
"GUJARATI": "グジャラーティー語",
"HAITIAN": "Haitian",
"HAUSA": "ハウサ語",
"HEBREW": "ヘブライ語",
"HERERO": "ヘレロ語",
"HINDI": "ヒンディー語",
"HIRI_MOTU": "ヒリモトゥ語",
"HUNGARIAN": "ハンガリー語",
"ICELANDIC": "アイスランド語",
"IDO": "イド語",
"IGBO": "イボ語",
"INDONESIAN": "インドネシア語",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "イヌクティトゥット語",
"INUPIAQ": "イヌピアック語",
"IRISH": "アイルランド語",
"ITALIAN": "イタリア語",
"JAPANESE": "日本語",
"JAVANESE": "ジャワ語",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "カンナダ語",
"KANURI": "カヌリ語",
"KASHMIRI": "カシミーリー語",
"KAZAKH": "カザフ語",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "キンヤルワンダ語",
"KIRGHIZ": "Kirghiz",
"KOMI": "コミ語",
"KONGO": "コンゴ語",
"KOREAN": "朝鮮語",
"KUANYAMA": "Kuanyama",
"KURDISH": "クルド語",
"LAO": "ラオ語",
"LATIN": "ラテン語",
"LATVIAN": "ラトビア語",
"LIMBURGAN": "Limburgan",
"LINGALA": "リンガラ語",
"LITHUANIAN": "リトアニア語",
"LUBA_KATANGA": "ルバ語",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "マケドニア語",
"MALAGASY": "マラガシ語",
"MALAY": "マレー語",
"MALAYALAM": "マラヤーラム語",
"MALTESE": "マルタ語",
"MANX": "マン島語",
"MAORI": "マオリ語",
"MARATHI": "マラーティー語",
"MARSHALLESE": "マーシャル語",
"MONGOLIAN": "モンゴル語",
"NAURU": "ナウル語",
"NAVAJO": "Navajo",
"NDONGA": "ンドンガ語",
"NEPALI": "ネパール語",
"NORTHERN_SAMI": "北サーミ語",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "ノルウェー語",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "オジブワ語",
"ORIYA": "オリヤー語",
"OROMO": "オロモ語",
"OSSETIAN": "Ossetian",
"PALI": "パーリ語",
"PANJABI": "Panjabi",
"PERSIAN": "ペルシア語",
"POLISH": "ポーランド語",
"PORTUGUESE": "ポルトガル語",
"PUSHTO": "Pushto",
"QUECHUA": "キチュワ語",
"ROMANIAN": "Romanian",
"ROMANSH": "ロマンシュ語",
"RUNDI": "ルンディ語",
"RUSSIAN": "ロシア語",
"SAMOAN": "サモア語",
"SANGO": "サンゴ語",
"SANSKRIT": "梵語",
"SARDINIAN": "サルデーニャ語",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "セルビア語",
"SHONA": "ショナ語",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "シンディー語",
"SINHALA": "Sinhala",
"SLOVAK": "スロヴァキア語",
"SLOVENIAN": "スロヴェニア語",
"SOMALI": "ソマリ語",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "スンダ語",
"SWAHILI": "スワヒリ語",
"SWATI": "シスワティ語",
"SWEDISH": "スウェーデン語",
"TAGALOG": "タガログ語",
"TAHITIAN": "タヒチ語",
"TAJIK": "タジク語",
"TAMIL": "タミル語",
"TATAR": "タタール語",
"TELUGU": "テルグ語",
"THAI": "タイ語",
"TIBETAN": "チベット語",
"TIGRINYA": "ティグリニア語",
"TONGA": "Tonga",
"TSONGA": "ツォンガ語",
"TSWANA": "ツワナ語",
"TURKISH": "トルコ語",
"TURKMEN": "トゥルクメン語",
"TWI": "トウィ語",
"UIGHUR": "Uighur",
"UKRAINIAN": "ウクライナ語",
"UNDEFINED": "undefined",
"URDU": "ウルドゥー語",
"UZBEK": "ウズベク語",
"VENDA": "ベンダ語",
"VIETNAMESE": "ベトナム語",
"VOLAPUK": "ボラピューク語",
"WALLOON": "ワロン語",
"WELSH": "ウェールズ語",
"WESTERN_FRISIAN": "西フリジア語",
"WOLOF": "ウォロフ語",
"XHOSA": "コサ語",
"YIDDISH": "イディッシュ語",
"YORUBA": "ヨルバ語",
"ZHUANG": "Zhuang",
"ZULU": "ズールー語"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<新しい番組>",
"Add": "追加",
"Add Pattern": "パターンを追加",
"Apply": "適用",
"Apply failed: {error}": "適用に失敗しました: {error}",
"Are you sure to delete the following filename pattern?": "次のファイル名パターンを削除してもよろしいですか?",
"Are you sure to delete the following shifted season?": "次のシーズンシフト設定を削除してもよろしいですか?",
"Are you sure to delete the following show?": "次の番組を削除してもよろしいですか?",
"Are you sure to delete the following {track_type} track?": "次の{track_type}ストリームを削除してもよろしいですか?",
"Are you sure to delete this tag?": "このタグを削除してもよろしいですか?",
"Audio Layout": "音声レイアウト",
"Back": "戻る",
"Cancel": "キャンセル",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "default または forced の disposition が設定されたストリームはこれ以上追加できません",
"Changes applied and file reloaded.": "変更を適用し、ファイルを再読み込みしました。",
"Cleanup": "クリーンアップ",
"Cleanup disabled.": "クリーンアップを無効にしました。",
"Cleanup enabled.": "クリーンアップを有効にしました。",
"Codec": "コーデック",
"Continuing edit session.": "編集セッションを続行します。",
"Default": "デフォルト",
"Delete": "削除",
"Delete Show": "番組を削除",
"Deleted media tag {tag!r}.": "メディアタグ {tag!r} を削除しました。",
"Differences": "差分",
"Differences (file->db/output)": "差分 (ファイル->DB/出力)",
"Discard": "破棄",
"Discard pending metadata changes and quit?": "保留中のメタデータ変更を破棄して終了しますか?",
"Discard pending metadata changes and reload the file state?": "保留中のメタデータ変更を破棄してファイル状態を再読み込みしますか?",
"Down": "下へ",
"Dry-run: would rewrite via temporary file {target_path}": "ドライラン: 一時ファイル {target_path} 経由で再書き込みします",
"Edit": "編集",
"Edit Pattern": "パターンを編集",
"Edit Show": "番組を編集",
"Edit filename pattern": "ファイル名パターンを編集",
"Edit shifted season": "シフト済みシーズンを編集",
"Edit stream": "ストリームを編集",
"Episode Offset": "エピソードオフセット",
"Episode offset": "エピソードオフセット",
"File": "ファイル",
"File patterns": "ファイルパターン",
"First Episode": "最初のエピソード",
"First episode": "最初のエピソード",
"Forced": "強制",
"Help": "ヘルプ",
"Help Screen": "ヘルプ画面",
"ID": "ID",
"Identify": "識別",
"Index": "インデックス",
"Index / Subindex": "インデックス / サブインデックス",
"Index Episode Digits": "インデックスのエピソード桁数",
"Index Season Digits": "インデックスのシーズン桁数",
"Indicator Edisode Digits": "インジケーターのエピソード桁数",
"Indicator Season Digits": "インジケーターのシーズン桁数",
"Keep Editing": "編集を続ける",
"Keeping pending changes.": "保留中の変更を保持します。",
"Key": "キー",
"Language": "言語",
"Last Episode": "最後のエピソード",
"Last episode": "最後のエピソード",
"Layout": "レイアウト",
"Media Tags": "メディアタグ",
"More than one default audio stream detected and no prompt set": "デフォルト音声ストリームが複数検出され、プロンプトも設定されていません",
"More than one default audio stream detected! Please select stream": "デフォルト音声ストリームが複数検出されました。ストリームを選択してください",
"More than one default subtitle stream detected and no prompt set": "デフォルト字幕ストリームが複数検出され、プロンプトも設定されていません",
"More than one default subtitle stream detected! Please select stream": "デフォルト字幕ストリームが複数検出されました。ストリームを選択してください",
"More than one default video stream detected and no prompt set": "デフォルト映像ストリームが複数検出され、プロンプトも設定されていません",
"More than one default video stream detected! Please select stream": "デフォルト映像ストリームが複数検出されました。ストリームを選択してください",
"More than one forced audio stream detected and no prompt set": "強制音声ストリームが複数検出され、プロンプトも設定されていません",
"More than one forced audio stream detected! Please select stream": "強制音声ストリームが複数検出されました。ストリームを選択してください",
"More than one forced subtitle stream detected and no prompt set": "強制字幕ストリームが複数検出され、プロンプトも設定されていません",
"More than one forced subtitle stream detected! Please select stream": "強制字幕ストリームが複数検出されました。ストリームを選択してください",
"More than one forced video stream detected and no prompt set": "強制映像ストリームが複数検出され、プロンプトも設定されていません",
"More than one forced video stream detected! Please select stream": "強制映像ストリームが複数検出されました。ストリームを選択してください",
"Name": "名前",
"New Pattern": "新しいパターン",
"New Show": "新しい番組",
"New filename pattern": "新しいファイル名パターン",
"New shifted season": "新しいシーズンシフト",
"New stream": "新しいストリーム",
"No": "いいえ",
"No changes to apply.": "適用する変更はありません。",
"No changes to revert.": "元に戻す変更はありません。",
"Normalization disabled.": "正規化を無効にしました。",
"Normalization enabled.": "正規化を有効にしました。",
"Normalize": "正規化",
"Notes": "メモ",
"Pattern": "パターン",
"Planned Changes (file->edited output)": "予定された変更 (ファイル->編集後出力)",
"Quality": "品質",
"Quit": "終了",
"Remove Pattern": "パターンを削除",
"Revert": "元に戻す",
"Reverted pending changes.": "保留中の変更を元に戻しました。",
"Save": "保存",
"Season Offset": "シーズンオフセット",
"Select a stream first.": "まずストリームを選択してください。",
"Set Default": "デフォルトに設定",
"Set Forced": "強制に設定",
"Settings Screen": "設定画面",
"Numbering Mapping": "シフト済みシーズン",
"Show": "番組",
"Shows": "番組一覧",
"Source Season": "元シーズン",
"SrcIndex": "元インデックス",
"Status": "状態",
"Stay": "このまま",
"Stream dispositions": "ストリーム disposition",
"Stream tags": "ストリームタグ",
"Streams": "ストリーム",
"SubIndex": "サブインデックス",
"Substitute": "置換",
"Substitute pattern": "パターンを置換",
"Title": "タイトル",
"Type": "タイプ",
"Unable to update selected stream.": "選択したストリームを更新できませんでした。",
"Up": "上へ",
"Update Pattern": "パターンを更新",
"Updated media tag {tag!r}.": "メディアタグ {tag!r} を更新しました。",
"Updated stream #{index} ({track_type}).": "ストリーム #{index} ({track_type}) を更新しました。",
"Value": "値",
"Year": "年",
"Yes": "はい",
"add media tag: key='{key}' value='{value}'": "メディアタグを追加: key='{key}' value='{value}'",
"add {track_type} track: index={index} lang={language}": "{track_type}ストリームを追加: index={index} lang={language}",
"attached_pic": "attached_pic",
"attachment": "添付",
"audio": "音声",
"captions": "キャプション",
"change media tag: key='{key}' value='{value}'": "メディアタグを変更: key='{key}' value='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "ストリーム #{index} ({track_type}:{sub_index}) disposition を追加={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "ストリーム #{index} ({track_type}:{sub_index}) key を追加={key} value={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "ストリーム #{index} ({track_type}:{sub_index}) key を変更={key} value={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "ストリーム #{index} ({track_type}:{sub_index}) disposition を削除={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "ストリーム #{index} ({track_type}:{sub_index}) key を削除={key} value={value}",
"clean_effects": "効果音のみ",
"comment": "コメント",
"default": "デフォルト",
"dependent": "依存",
"descriptions": "解説",
"dub": "吹替",
"for pattern": "パターン用",
"forced": "強制",
"from": "元",
"from pattern": "パターンから",
"from show": "番組から",
"hearing_impaired": "聴覚障害者向け",
"karaoke": "カラオケ",
"lyrics": "歌詞",
"metadata": "メタデータ",
"non_diegetic": "非ダイジェティック",
"original": "オリジナル",
"pattern #{id}": "パターン #{id}",
"remove media tag: key='{key}' value='{value}'": "メディアタグを削除: key='{key}' value='{value}'",
"remove stream #{index}": "ストリーム #{index} を削除",
"show #{id}": "番組 #{id}",
"stereo": "ステレオ",
"still_image": "静止画",
"sub index": "サブインデックス",
"subtitle": "字幕",
"timed_thumbnails": "時間指定サムネイル",
"undefined": "未定義",
"unknown": "不明",
"video": "映像",
"visual_impaired": "視覚障害者向け"
}
}

assets/i18n/nb.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "Abkhazian",
"AFAR": "afar",
"AFRIKAANS": "Afrikansk",
"AKAN": "Akan",
"ALBANIAN": "Albansk",
"AMHARIC": "Amharic",
"ARABIC": "Arabisk",
"ARAGONESE": "aragonsk",
"ARMENIAN": "armensk",
"ASSAMESE": "assamisk",
"AVARIC": "Avaric",
"AVESTAN": "avestisk",
"AYMARA": "aymara",
"AZERBAIJANI": "Aserbajdsjansk",
"BAMBARA": "bambara",
"BASHKIR": "basjkirsk",
"BASQUE": "Baskisk",
"BELARUSIAN": "Hviterussisk",
"BENGALI": "bengali",
"BISLAMA": "bislama",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bosnisk",
"BRETON": "Breton",
"BULGARIAN": "Bulgarsk",
"BURMESE": "burmesisk",
"CATALAN": "Catalan",
"CHAMORRO": "chamorro",
"CHECHEN": "Chechen",
"CHICHEWA": "Chichewa",
"CHINESE": "Kinesisk",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "tsjuvasjisk",
"CORNISH": "Cornish",
"CORSICAN": "Korsikansk",
"CREE": "Cree",
"CROATIAN": "Kroatisk",
"CZECH": "Tsjekkisk",
"DANISH": "Dansk",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "dzongkha",
"ENGLISH": "Engelsk",
"ESPERANTO": "Esperanto",
"ESTONIAN": "Estonsk",
"EWE": "ewe",
"FAROESE": "færøysk",
"FIJIAN": "fijiansk",
"FILIPINO": "Filipino",
"FINNISH": "Finsk",
"FRENCH": "Fransk",
"FULAH": "fulani",
"GALICIAN": "Galisisk",
"GANDA": "ganda",
"GEORGIAN": "Georgisk",
"GERMAN": "Tysk",
"GREEK": "Greek",
"GUARANI": "Guarani",
"GUJARATI": "gujarati",
"HAITIAN": "Haitian",
"HAUSA": "Hausa",
"HEBREW": "Hebraisk",
"HERERO": "Herero",
"HINDI": "hindi",
"HIRI_MOTU": "Hiri Motu",
"HUNGARIAN": "Ungarsk",
"ICELANDIC": "Islandsk",
"IDO": "ido",
"IGBO": "ibo",
"INDONESIAN": "Indonesisk",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "inuktitut",
"INUPIAQ": "inupiak",
"IRISH": "Irsk",
"ITALIAN": "Italiensk",
"JAPANESE": "Japansk",
"JAVANESE": "Javanesisk",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "kannada",
"KANURI": "Kanuri",
"KASHMIRI": "kasjmiri",
"KAZAKH": "kasakhisk",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "kinjarwanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "komi",
"KONGO": "kikongo",
"KOREAN": "Koreansk",
"KUANYAMA": "Kuanyama",
"KURDISH": "Kurdisk",
"LAO": "laotisk",
"LATIN": "Latin",
"LATVIAN": "Latvisk",
"LIMBURGAN": "Limburgan",
"LINGALA": "lingala",
"LITHUANIAN": "Litauisk",
"LUBA_KATANGA": "luba-katanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Makedonsk",
"MALAGASY": "madagassisk",
"MALAY": "malayisk",
"MALAYALAM": "malayalam",
"MALTESE": "Maltesisk",
"MANX": "manx",
"MAORI": "Maori",
"MARATHI": "Marathi",
"MARSHALLESE": "Marshallese",
"MONGOLIAN": "Mongolsk",
"NAURU": "nauru",
"NAVAJO": "Navajo",
"NDONGA": "Ndonga",
"NEPALI": "nepalsk",
"NORTHERN_SAMI": "nordsamisk",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Norsk",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "ojibwa",
"ORIYA": "oriya",
"OROMO": "oromo",
"OSSETIAN": "Ossetian",
"PALI": "Pali",
"PANJABI": "Panjabi",
"PERSIAN": "Persisk",
"POLISH": "Polsk",
"PORTUGUESE": "Portugisisk",
"PUSHTO": "Pushto",
"QUECHUA": "quechua",
"ROMANIAN": "Romanian",
"ROMANSH": "Romansh",
"RUNDI": "rundi",
"RUSSIAN": "Russisk",
"SAMOAN": "samoansk",
"SANGO": "sango",
"SANSKRIT": "sanskrit",
"SARDINIAN": "Sardinsk",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Serbisk",
"SHONA": "Shona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "sindhi",
"SINHALA": "Sinhala",
"SLOVAK": "Slovakisk",
"SLOVENIAN": "Slovensk",
"SOMALI": "somalisk",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "sundanesisk",
"SWAHILI": "swahili",
"SWATI": "swati",
"SWEDISH": "Svensk",
"TAGALOG": "tagalog",
"TAHITIAN": "Tahitisk",
"TAJIK": "Tajik",
"TAMIL": "Tamilsk",
"TATAR": "tatarisk",
"TELUGU": "telugu",
"THAI": "Thai",
"TIBETAN": "tibetansk",
"TIGRINYA": "Tigrinya",
"TONGA": "Tonga",
"TSONGA": "tsonga",
"TSWANA": "tswana",
"TURKISH": "Tyrkisk",
"TURKMEN": "turkmensk",
"TWI": "twi",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ukrainsk",
"UNDEFINED": "undefined",
"URDU": "urdu",
"UZBEK": "usbekisk",
"VENDA": "venda",
"VIETNAMESE": "Vietnamesisk",
"VOLAPUK": "Volapük",
"WALLOON": "vallonsk",
"WELSH": "Walisisk",
"WESTERN_FRISIAN": "Vestfrisisk",
"WOLOF": "wolof",
"XHOSA": "Xhosa",
"YIDDISH": "jiddisk",
"YORUBA": "joruba",
"ZHUANG": "Zhuang",
"ZULU": "Zulu"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<Ny serie>",
"Add": "Legg til",
"Add Pattern": "Legg til mønster",
"Apply": "Bruk",
"Apply failed: {error}": "Kunne ikke bruke endringene: {error}",
"Are you sure to delete the following filename pattern?": "Er du sikker på at du vil slette følgende filnavnmønster?",
"Are you sure to delete the following shifted season?": "Er du sikker på at du vil slette følgende forskjøvede sesong?",
"Are you sure to delete the following show?": "Er du sikker på at du vil slette følgende serie?",
"Are you sure to delete the following {track_type} track?": "Er du sikker på at du vil slette følgende {track_type}-spor?",
"Are you sure to delete this tag?": "Er du sikker på at du vil slette denne taggen?",
"Audio Layout": "Lydoppsett",
"Back": "Tilbake",
"Cancel": "Avbryt",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "Kan ikke legge til en ny strøm med disposisjonsflagget 'default' eller 'forced' satt",
"Changes applied and file reloaded.": "Endringene er brukt og filen er lastet inn på nytt.",
"Cleanup": "Rydd opp",
"Cleanup disabled.": "Rydding deaktivert.",
"Cleanup enabled.": "Rydding aktivert.",
"Codec": "Kodek",
"Continuing edit session.": "Fortsetter redigeringsøkten.",
"Default": "Standard",
"Delete": "Slett",
"Delete Show": "Slett serie",
"Deleted media tag {tag!r}.": "Mediataggen {tag!r} ble slettet.",
"Differences": "Forskjeller",
"Differences (file->db/output)": "Forskjeller (fil->DB/utdata)",
"Discard": "Forkast",
"Discard pending metadata changes and quit?": "Forkaste ventende metadataendringer og avslutte?",
"Discard pending metadata changes and reload the file state?": "Forkaste ventende metadataendringer og laste filtilstanden på nytt?",
"Down": "Ned",
"Dry-run: would rewrite via temporary file {target_path}": "Tørrkjøring: ville skrevet om via midlertidig fil {target_path}",
"Edit": "Rediger",
"Edit Pattern": "Rediger mønster",
"Edit Show": "Rediger serie",
"Edit filename pattern": "Rediger filnavnmønster",
"Edit shifted season": "Rediger forskjøvet sesong",
"Edit stream": "Rediger strøm",
"Episode Offset": "Episodeforskyvning",
"Episode offset": "Episodeforskyvning",
"File": "Fil",
"File patterns": "Filmønstre",
"First Episode": "Første episode",
"First episode": "Første episode",
"Forced": "Tvungen",
"Help": "Hjelp",
"Help Screen": "Hjelpeskjerm",
"ID": "ID",
"Identify": "Identifiser",
"Index": "Indeks",
"Index / Subindex": "Indeks / Underindeks",
"Index Episode Digits": "Siffer for episodeindeks",
"Index Season Digits": "Siffer for sesongindeks",
"Indicator Edisode Digits": "Siffer for episodeindikator",
"Indicator Season Digits": "Siffer for sesongindikator",
"Keep Editing": "Fortsett redigeringen",
"Keeping pending changes.": "Beholder ventende endringer.",
"Key": "Nøkkel",
"Language": "Språk",
"Last Episode": "Siste episode",
"Last episode": "Siste episode",
"Layout": "Oppsett",
"Media Tags": "Mediatagger",
"More than one default audio stream detected and no prompt set": "Mer enn én standard lydstrøm funnet og ingen forespørsel satt",
"More than one default audio stream detected! Please select stream": "Mer enn én standard lydstrøm funnet. Velg strøm",
"More than one default subtitle stream detected and no prompt set": "Mer enn én standard undertekststrøm funnet og ingen forespørsel satt",
"More than one default subtitle stream detected! Please select stream": "Mer enn én standard undertekststrøm funnet. Velg strøm",
"More than one default video stream detected and no prompt set": "Mer enn én standard videostrøm funnet og ingen forespørsel satt",
"More than one default video stream detected! Please select stream": "Mer enn én standard videostrøm funnet. Velg strøm",
"More than one forced audio stream detected and no prompt set": "Mer enn én tvungen lydstrøm funnet og ingen forespørsel satt",
"More than one forced audio stream detected! Please select stream": "Mer enn én tvungen lydstrøm funnet. Velg strøm",
"More than one forced subtitle stream detected and no prompt set": "Mer enn én tvungen undertekststrøm funnet og ingen forespørsel satt",
"More than one forced subtitle stream detected! Please select stream": "Mer enn én tvungen undertekststrøm funnet. Velg strøm",
"More than one forced video stream detected and no prompt set": "Mer enn én tvungen videostrøm funnet og ingen forespørsel satt",
"More than one forced video stream detected! Please select stream": "Mer enn én tvungen videostrøm funnet. Velg strøm",
"Name": "Navn",
"New Pattern": "Nytt mønster",
"New Show": "Ny serie",
"New filename pattern": "Nytt filnavnmønster",
"New shifted season": "Ny forskjøvet sesong",
"New stream": "Ny strøm",
"No": "Nei",
"No changes to apply.": "Ingen endringer å bruke.",
"No changes to revert.": "Ingen endringer å tilbakestille.",
"Normalization disabled.": "Normalisering deaktivert.",
"Normalization enabled.": "Normalisering aktivert.",
"Normalize": "Normaliser",
"Notes": "Notater",
"Pattern": "Mønster",
"Planned Changes (file->edited output)": "Planlagte endringer (fil->redigert utdata)",
"Quality": "Kvalitet",
"Quit": "Avslutt",
"Remove Pattern": "Fjern mønster",
"Revert": "Tilbakestill",
"Reverted pending changes.": "Ventende endringer ble tilbakestilt.",
"Save": "Lagre",
"Season Offset": "Sesongforskyvning",
"Select a stream first.": "Velg en strøm først.",
"Set Default": "Sett som standard",
"Set Forced": "Sett som tvungen",
"Settings Screen": "Innstillingsskjerm",
"Numbering Mapping": "Forskjøvne sesonger",
"Show": "Serie",
"Shows": "Serier",
"Source Season": "Kildesesong",
"SrcIndex": "Kildeindeks",
"Status": "Status",
"Stay": "Bli",
"Stream dispositions": "Strømdisposisjoner",
"Stream tags": "Strømtagger",
"Streams": "Strømmer",
"SubIndex": "Underindeks",
"Substitute": "Erstatt",
"Substitute pattern": "Erstatt mønster",
"Title": "Tittel",
"Type": "Type",
"Unable to update selected stream.": "Kunne ikke oppdatere valgt strøm.",
"Up": "Opp",
"Update Pattern": "Oppdater mønster",
"Updated media tag {tag!r}.": "Mediataggen {tag!r} ble oppdatert.",
"Updated stream #{index} ({track_type}).": "Strøm #{index} ({track_type}) oppdatert.",
"Value": "Verdi",
"Year": "År",
"Yes": "Ja",
"add media tag: key='{key}' value='{value}'": "legg til mediatagg: nøkkel='{key}' verdi='{value}'",
"add {track_type} track: index={index} lang={language}": "legg til {track_type}-spor: indeks={index} språk={language}",
"attached_pic": "attached_pic",
"attachment": "vedlegg",
"audio": "lyd",
"captions": "teksting",
"change media tag: key='{key}' value='{value}'": "endre mediatagg: nøkkel='{key}' verdi='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "endre strøm #{index} ({track_type}:{sub_index}) legg til disposisjon={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "endre strøm #{index} ({track_type}:{sub_index}) legg til nøkkel={key} verdi={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "endre strøm #{index} ({track_type}:{sub_index}) endre nøkkel={key} verdi={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "endre strøm #{index} ({track_type}:{sub_index}) fjern disposisjon={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "endre strøm #{index} ({track_type}:{sub_index}) fjern nøkkel={key} verdi={value}",
"clean_effects": "bare effekter",
"comment": "kommentar",
"default": "standard",
"dependent": "avhengig",
"descriptions": "beskrivelser",
"dub": "dubbet",
"for pattern": "for mønster",
"forced": "tvungen",
"from": "fra",
"from pattern": "fra mønster",
"from show": "fra serie",
"hearing_impaired": "hørselshemmet",
"karaoke": "karaoke",
"lyrics": "sangtekst",
"metadata": "metadata",
"non_diegetic": "ikke-diegetisk",
"original": "original",
"pattern #{id}": "mønster #{id}",
"remove media tag: key='{key}' value='{value}'": "fjern mediatagg: nøkkel='{key}' verdi='{value}'",
"remove stream #{index}": "fjern strøm #{index}",
"show #{id}": "serie #{id}",
"stereo": "stereo",
"still_image": "stillbilde",
"sub index": "underindeks",
"subtitle": "undertekst",
"timed_thumbnails": "tidsbestemte miniatyrer",
"undefined": "udefinert",
"unknown": "ukjent",
"video": "video",
"visual_impaired": "synshemmet"
}
}

assets/i18n/pt.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "abkhazian",
"AFAR": "afar",
"AFRIKAANS": "Africâner",
"AKAN": "Akan",
"ALBANIAN": "Albanês",
"AMHARIC": "Amárico",
"ARABIC": "Árabe",
"ARAGONESE": "Aragonês",
"ARMENIAN": "arménio",
"ASSAMESE": "assamês",
"AVARIC": "Avárico",
"AVESTAN": "avéstico",
"AYMARA": "aimara",
"AZERBAIJANI": "Azerbaijani",
"BAMBARA": "bambara",
"BASHKIR": "bashkir",
"BASQUE": "Basco",
"BELARUSIAN": "Bielorusso",
"BENGALI": "Bengali",
"BISLAMA": "bislamá",
"BOKMAL": "Bokmål",
"BOSNIAN": "Bósnio",
"BRETON": "Bretão",
"BULGARIAN": "Búlgaro",
"BURMESE": "birmanês",
"CATALAN": "Catalan",
"CHAMORRO": "chamorro",
"CHECHEN": "Checheno",
"CHICHEWA": "Chichewa",
"CHINESE": "Chinês",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "chuvash",
"CORNISH": "Córnico",
"CORSICAN": "córsico",
"CREE": "Cree",
"CROATIAN": "Croata",
"CZECH": "Checo",
"DANISH": "Dinamarquês",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "dzonga",
"ENGLISH": "Inglês",
"ESPERANTO": "Esperanto",
"ESTONIAN": "Estoniano",
"EWE": "eve",
"FAROESE": "Faroês",
"FIJIAN": "fijiano",
"FILIPINO": "Filipino",
"FINNISH": "Finlandês",
"FRENCH": "Francês",
"FULAH": "fula",
"GALICIAN": "Galego",
"GANDA": "luganda",
"GEORGIAN": "georgiano",
"GERMAN": "Alemão",
"GREEK": "Greek",
"GUARANI": "Guarani",
"GUJARATI": "Guzerate",
"HAITIAN": "Haitian",
"HAUSA": "Hauçá",
"HEBREW": "Hebraico",
"HERERO": "Hereró",
"HINDI": "Hindi",
"HIRI_MOTU": "Hiri Motu",
"HUNGARIAN": "Húngaro",
"ICELANDIC": "Islandês",
"IDO": "ido",
"IGBO": "ibo",
"INDONESIAN": "Indonésio",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "inuktitut",
"INUPIAQ": "Inupiaque",
"IRISH": "Irlandês",
"ITALIAN": "Italiano",
"JAPANESE": "Japonês",
"JAVANESE": "Javanês",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "Kannada",
"KANURI": "Canúri",
"KASHMIRI": "kashmiri",
"KAZAKH": "cazaque",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "kinyarwanda",
"KIRGHIZ": "Kirghiz",
"KOMI": "komi",
"KONGO": "congolês",
"KOREAN": "Coreano",
"KUANYAMA": "Kuanyama",
"KURDISH": "Curdo",
"LAO": "Laosiano",
"LATIN": "Latim",
"LATVIAN": "Letão",
"LIMBURGAN": "Limburgan",
"LINGALA": "Lingala",
"LITHUANIAN": "Lituano",
"LUBA_KATANGA": "luba-catanga",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "Macedônio",
"MALAGASY": "malgaxe",
"MALAY": "Malaio",
"MALAYALAM": "malaiala",
"MALTESE": "Maltês",
"MANX": "Manx",
"MAORI": "Maori",
"MARATHI": "marata",
"MARSHALLESE": "Marshalês",
"MONGOLIAN": "Mongol",
"NAURU": "nauruano",
"NAVAJO": "Navajo",
"NDONGA": "dongo",
"NEPALI": "Nepalês",
"NORTHERN_SAMI": "northern sami",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "Norueguês",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "ojibwa",
"ORIYA": "oriya",
"OROMO": "Oromo",
"OSSETIAN": "Ossetian",
"PALI": "Páli",
"PANJABI": "Panjabi",
"PERSIAN": "Persa",
"POLISH": "Polaco",
"PORTUGUESE": "Português",
"PUSHTO": "Pushto",
"QUECHUA": "quíchua",
"ROMANIAN": "Romanian",
"ROMANSH": "Romanche",
"RUNDI": "rundi",
"RUSSIAN": "Russo",
"SAMOAN": "Samoano",
"SANGO": "sango",
"SANSKRIT": "Sânscrito",
"SARDINIAN": "Sardo",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "Sérvio",
"SHONA": "Xona",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "sindi",
"SINHALA": "Sinhala",
"SLOVAK": "Eslovaco",
"SLOVENIAN": "Esloveno",
"SOMALI": "somali",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "sundanês",
"SWAHILI": "suaíli",
"SWATI": "swati",
"SWEDISH": "Sueco",
"TAGALOG": "Tagalo",
"TAHITIAN": "Taitiano",
"TAJIK": "Tadjique",
"TAMIL": "Tâmil",
"TATAR": "tatar",
"TELUGU": "Telugu",
"THAI": "Tailandês",
"TIBETAN": "tibetano",
"TIGRINYA": "Tigrínia",
"TONGA": "Tonga",
"TSONGA": "tsonga",
"TSWANA": "tswana",
"TURKISH": "Turco",
"TURKMEN": "turcomano",
"TWI": "twi",
"UIGHUR": "Uighur",
"UKRAINIAN": "Ucraniano",
"UNDEFINED": "undefined",
"URDU": "urdu",
"UZBEK": "usbeque",
"VENDA": "venda",
"VIETNAMESE": "Vietnamita",
"VOLAPUK": "Volapuque",
"WALLOON": "valão",
"WELSH": "galês",
"WESTERN_FRISIAN": "Frísio ocidental",
"WOLOF": "uolofe",
"XHOSA": "xosa",
"YIDDISH": "iídiche",
"YORUBA": "ioruba",
"ZHUANG": "Zhuang",
"ZULU": "zulu"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<Nova série>",
"Add": "Adicionar",
"Add Pattern": "Adicionar padrão",
"Apply": "Aplicar",
"Apply failed: {error}": "Falha ao aplicar: {error}",
"Are you sure to delete the following filename pattern?": "Tem certeza de que deseja excluir o seguinte padrão de nome de arquivo?",
"Are you sure to delete the following shifted season?": "Tem certeza de que deseja excluir a seguinte temporada deslocada?",
"Are you sure to delete the following show?": "Tem certeza de que deseja excluir a seguinte série?",
"Are you sure to delete the following {track_type} track?": "Tem certeza de que deseja excluir a seguinte faixa {track_type}?",
"Are you sure to delete this tag?": "Tem certeza de que deseja excluir esta tag?",
"Audio Layout": "Layout de áudio",
"Back": "Voltar",
"Cancel": "Cancelar",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "Não é possível adicionar outro fluxo com a flag de disposição 'default' ou 'forced' definida",
"Changes applied and file reloaded.": "Alterações aplicadas e arquivo recarregado.",
"Cleanup": "Limpeza",
"Cleanup disabled.": "Limpeza desativada.",
"Cleanup enabled.": "Limpeza ativada.",
"Codec": "Codec",
"Continuing edit session.": "Continuando a sessão de edição.",
"Default": "Padrão",
"Delete": "Excluir",
"Delete Show": "Excluir série",
"Deleted media tag {tag!r}.": "Tag de mídia {tag!r} excluída.",
"Differences": "Diferenças",
"Differences (file->db/output)": "Diferenças (arquivo->BD/saída)",
"Discard": "Descartar",
"Discard pending metadata changes and quit?": "Descartar alterações pendentes de metadados e sair?",
"Discard pending metadata changes and reload the file state?": "Descartar alterações pendentes de metadados e recarregar o estado do arquivo?",
"Down": "Baixo",
"Dry-run: would rewrite via temporary file {target_path}": "Execução simulada: regravaria via arquivo temporário {target_path}",
"Edit": "Editar",
"Edit Pattern": "Editar padrão",
"Edit Show": "Editar série",
"Edit filename pattern": "Editar padrão de nome de arquivo",
"Edit shifted season": "Editar temporada deslocada",
"Edit stream": "Editar fluxo",
"Episode Offset": "Deslocamento de episódio",
"Episode offset": "Deslocamento de episódio",
"File": "Arquivo",
"File patterns": "Padrões de arquivo",
"First Episode": "Primeiro episódio",
"First episode": "Primeiro episódio",
"Forced": "Forçado",
"Help": "Ajuda",
"Help Screen": "Tela de ajuda",
"ID": "ID",
"Identify": "Identificar",
"Index": "Índice",
"Index / Subindex": "Índice / Subíndice",
"Index Episode Digits": "Dígitos do índice do episódio",
"Index Season Digits": "Dígitos do índice da temporada",
"Indicator Edisode Digits": "Dígitos do indicador do episódio",
"Indicator Season Digits": "Dígitos do indicador da temporada",
"Keep Editing": "Continuar editando",
"Keeping pending changes.": "Mantendo alterações pendentes.",
"Key": "Chave",
"Language": "Idioma",
"Last Episode": "Último episódio",
"Last episode": "Último episódio",
"Layout": "Layout",
"Media Tags": "Tags de mídia",
"More than one default audio stream detected and no prompt set": "Mais de um fluxo de áudio padrão detectado e nenhum prompt definido",
"More than one default audio stream detected! Please select stream": "Mais de um fluxo de áudio padrão detectado! Selecione o fluxo",
"More than one default subtitle stream detected and no prompt set": "Mais de um fluxo de legenda padrão detectado e nenhum prompt definido",
"More than one default subtitle stream detected! Please select stream": "Mais de um fluxo de legenda padrão detectado! Selecione o fluxo",
"More than one default video stream detected and no prompt set": "Mais de um fluxo de vídeo padrão detectado e nenhum prompt definido",
"More than one default video stream detected! Please select stream": "Mais de um fluxo de vídeo padrão detectado! Selecione o fluxo",
"More than one forced audio stream detected and no prompt set": "Mais de um fluxo de áudio forçado detectado e nenhum prompt definido",
"More than one forced audio stream detected! Please select stream": "Mais de um fluxo de áudio forçado detectado! Selecione o fluxo",
"More than one forced subtitle stream detected and no prompt set": "Mais de um fluxo de legenda forçada detectado e nenhum prompt definido",
"More than one forced subtitle stream detected! Please select stream": "Mais de um fluxo de legenda forçada detectado! Selecione o fluxo",
"More than one forced video stream detected and no prompt set": "Mais de um fluxo de vídeo forçado detectado e nenhum prompt definido",
"More than one forced video stream detected! Please select stream": "Mais de um fluxo de vídeo forçado detectado! Selecione o fluxo",
"Name": "Nome",
"New Pattern": "Novo padrão",
"New Show": "Nova série",
"New filename pattern": "Novo padrão de nome de arquivo",
"New shifted season": "Nova temporada deslocada",
"New stream": "Novo fluxo",
"No": "Não",
"No changes to apply.": "Nenhuma alteração para aplicar.",
"No changes to revert.": "Nenhuma alteração para reverter.",
"Normalization disabled.": "Normalização desativada.",
"Normalization enabled.": "Normalização ativada.",
"Normalize": "Normalizar",
"Notes": "Notas",
"Pattern": "Padrão",
"Planned Changes (file->edited output)": "Alterações planejadas (arquivo->saída editada)",
"Quality": "Qualidade",
"Quit": "Sair",
"Remove Pattern": "Remover padrão",
"Revert": "Reverter",
"Reverted pending changes.": "Alterações pendentes revertidas.",
"Save": "Salvar",
"Season Offset": "Deslocamento de temporada",
"Select a stream first.": "Selecione um fluxo primeiro.",
"Set Default": "Definir como padrão",
"Set Forced": "Definir como forçado",
"Settings Screen": "Tela de configurações",
"Numbering Mapping": "Temporadas deslocadas",
"Show": "Série",
"Shows": "Séries",
"Source Season": "Temporada de origem",
"SrcIndex": "Índice de origem",
"Status": "Status",
"Stay": "Permanecer",
"Stream dispositions": "Disposições do fluxo",
"Stream tags": "Tags do fluxo",
"Streams": "Fluxos",
"SubIndex": "Subíndice",
"Substitute": "Substituir",
"Substitute pattern": "Substituir padrão",
"Title": "Título",
"Type": "Tipo",
"Unable to update selected stream.": "Não foi possível atualizar o fluxo selecionado.",
"Up": "Cima",
"Update Pattern": "Atualizar padrão",
"Updated media tag {tag!r}.": "Tag de mídia {tag!r} atualizada.",
"Updated stream #{index} ({track_type}).": "Fluxo #{index} ({track_type}) atualizado.",
"Value": "Valor",
"Year": "Ano",
"Yes": "Sim",
"add media tag: key='{key}' value='{value}'": "adicionar tag de mídia: chave='{key}' valor='{value}'",
"add {track_type} track: index={index} lang={language}": "adicionar faixa {track_type}: índice={index} idioma={language}",
"attached_pic": "attached_pic",
"attachment": "anexo",
"audio": "áudio",
"captions": "legendas",
"change media tag: key='{key}' value='{value}'": "alterar tag de mídia: chave='{key}' valor='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "alterar fluxo #{index} ({track_type}:{sub_index}) adicionar disposição={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "alterar fluxo #{index} ({track_type}:{sub_index}) adicionar chave={key} valor={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "alterar fluxo #{index} ({track_type}:{sub_index}) alterar chave={key} valor={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "alterar fluxo #{index} ({track_type}:{sub_index}) remover disposição={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "alterar fluxo #{index} ({track_type}:{sub_index}) remover chave={key} valor={value}",
"clean_effects": "apenas efeitos",
"comment": "comentário",
"default": "padrão",
"dependent": "dependente",
"descriptions": "descrições",
"dub": "dublado",
"for pattern": "para o padrão",
"forced": "forçado",
"from": "de",
"from pattern": "do padrão",
"from show": "da série",
"hearing_impaired": "deficiência auditiva",
"karaoke": "karaokê",
"lyrics": "letra",
"metadata": "metadados",
"non_diegetic": "não diegético",
"original": "original",
"pattern #{id}": "padrão #{id}",
"remove media tag: key='{key}' value='{value}'": "remover tag de mídia: chave='{key}' valor='{value}'",
"remove stream #{index}": "remover fluxo #{index}",
"show #{id}": "série #{id}",
"stereo": "estéreo",
"still_image": "imagem estática",
"sub index": "subíndice",
"subtitle": "legenda",
"timed_thumbnails": "miniaturas temporizadas",
"undefined": "indefinido",
"unknown": "desconhecido",
"video": "vídeo",
"visual_impaired": "deficiência visual"
}
}

assets/i18n/ta.json Normal file

@@ -0,0 +1,361 @@
{
"iso_languages": {
"ABKHAZIAN": "அப்காசியன்",
"AFAR": "அஃபர்",
"AFRIKAANS": "ஆப்ரிக்கான்ச்",
"AKAN": "அகான்",
"ALBANIAN": "அல்பேனியன்",
"AMHARIC": "அம்ஆரிக்",
"ARABIC": "அராபிக்",
"ARAGONESE": "அரகோன்ச்",
"ARMENIAN": "அர்மேனியன்",
"ASSAMESE": "அச்சாமி",
"AVARIC": "அவாரிக்",
"AVESTAN": "அவேச்டன்",
"AYMARA": "அய்மாரா",
"AZERBAIJANI": "அசெர்பெய்சானி",
"BAMBARA": "பம்பரா",
"BASHKIR": "பாச்கிர்",
"BASQUE": "பாச்க்",
"BELARUSIAN": "பெலாருசியன்",
"BENGALI": "பெங்காலி",
"BISLAMA": "பிச்லாமா",
"BOKMAL": "Bokmål",
"BOSNIAN": "போச்னியன்",
"BRETON": "ப்ரெடன்",
"BULGARIAN": "பல்கேரியன்",
"BURMESE": "பர்மீசி",
"CATALAN": "Catalan",
"CHAMORRO": "சாமோர்ரோ",
"CHECHEN": "செக்சன்",
"CHICHEWA": "Chichewa",
"CHINESE": "சைனீச்",
"CHURCH_SLAVIC": "Church Slavic",
"CHUVASH": "சுவாச்",
"CORNISH": "கோர்னிச்",
"CORSICAN": "கோர்சிகேன்",
"CREE": "சிரீ",
"CROATIAN": "குரேசியன்",
"CZECH": "செக்",
"DANISH": "டானிச்",
"DIVEHI": "Divehi",
"DUTCH": "Dutch",
"DZONGKHA": "ட்சொங்க்கா",
"ENGLISH": "ஆங்கிலம்",
"ESPERANTO": "எச்பெரான்டொ",
"ESTONIAN": "எச்டோனியன்",
"EWE": "இவ்",
"FAROESE": "ஃபரோச்",
"FIJIAN": "ஃபிசியன்",
"FILIPINO": "Filipino",
"FINNISH": "பின்னிச்",
"FRENCH": "பிரெஞ்சு",
"FULAH": "ஃபுல்லா",
"GALICIAN": "காலிசியன்",
"GANDA": "கான்டா",
"GEORGIAN": "சியார்சியன்",
"GERMAN": "செர்மன்",
"GREEK": "Greek",
"GUARANI": "குர்ரானி",
"GUJARATI": "குசராத்தி",
"HAITIAN": "Haitian",
"HAUSA": "ஔசா",
"HEBREW": "ஈப்ரு",
"HERERO": "இரீரோ",
"HINDI": "இந்தி",
"HIRI_MOTU": "இரி மோட்டு",
"HUNGARIAN": "அங்கேரியன்",
"ICELANDIC": "ஐச்லாண்டிக்",
"IDO": "ஐடூ",
"IGBO": "இக்போ",
"INDONESIAN": "இந்தோனேசியன்",
"INTERLINGUA": "Interlingua",
"INTERLINGUE": "Interlingue",
"INUKTITUT": "இனுடிடட்",
"INUPIAQ": "இனுபைக்யூ",
"IRISH": "ஐரிச்",
"ITALIAN": "இத்தாலியன்",
"JAPANESE": "சப்பானிய",
"JAVANESE": "சவானிச்",
"KALAALLISUT": "Kalaallisut",
"KANNADA": "கன்னடம்",
"KANURI": "கனுரி",
"KASHMIRI": "காச்மீரி",
"KAZAKH": "கசாக்ச்",
"KHMER": "Khmer",
"KIKUYU": "Kikuyu",
"KINYARWANDA": "கின்யார்வான்டா",
"KIRGHIZ": "Kirghiz",
"KOMI": "கோமி",
"KONGO": "காங்கோ",
"KOREAN": "கொரியன்",
"KUANYAMA": "Kuanyama",
"KURDISH": "குர்திச்",
"LAO": "லாவோ",
"LATIN": "லத்தீன்",
"LATVIAN": "லாட்வியன்",
"LIMBURGAN": "Limburgan",
"LINGALA": "லின்காலா",
"LITHUANIAN": "லிதுவேனியன்",
"LUBA_KATANGA": "லூபா-கடான்கா",
"LUXEMBOURGISH": "Luxembourgish",
"MACEDONIAN": "மேசடோனியன்",
"MALAGASY": "மலகாசி",
"MALAY": "மலாய்",
"MALAYALAM": "மலையாளம்",
"MALTESE": "மல்டீச்",
"MANX": "மான்ச்",
"MAORI": "மௌரி",
"MARATHI": "மராத்தி",
"MARSHALLESE": "மார்சலீசீ",
"MONGOLIAN": "மங்கோலியன்",
"NAURU": "நவூரு",
"NAVAJO": "Navajo",
"NDONGA": "நடோன்கா",
"NEPALI": "நேபாலி",
"NORTHERN_SAMI": "கிழக்கு சாமி",
"NORTH_NDEBELE": "North Ndebele",
"NORWEGIAN": "நார்வேசியன்",
"NORWEGIAN_NYNORSK": "Nynorsk",
"OCCITAN": "Occitan",
"OJIBWA": "ஒசிப்வா",
"ORIYA": "ஒரியா",
"OROMO": "ஒரோமோ",
"OSSETIAN": "Ossetian",
"PALI": "பாலி",
"PANJABI": "Panjabi",
"PERSIAN": "பெர்சியன்",
"POLISH": "போலிச்",
"PORTUGUESE": "போர்த்துக்கீசிய",
"PUSHTO": "Pushto",
"QUECHUA": "க்யுசோ",
"ROMANIAN": "Romanian",
"ROMANSH": "ரோமான்ச்ச்",
"RUNDI": "ருண்டி",
"RUSSIAN": "ரச்யன்",
"SAMOAN": "சாமோயன்",
"SANGO": "சான்ங்கோ",
"SANSKRIT": "சான்ச்கிரிட்",
"SARDINIAN": "சார்டினியன்",
"SCOTTISH_GAELIC": "Scottish Gaelic",
"SERBIAN": "செர்பியன்",
"SHONA": "சோனா",
"SICHUAN_YI": "Sichuan Yi",
"SINDHI": "சிந்தி",
"SINHALA": "Sinhala",
"SLOVAK": "சுலோவாக்",
"SLOVENIAN": "ச்லோவெனியன்",
"SOMALI": "சோமாலி",
"SOUTHERN_SOTHO": "Southern Sotho",
"SOUTH_NDEBELE": "South Ndebele",
"SPANISH": "Spanish",
"SUNDANESE": "சூடானீச்",
"SWAHILI": "ச்வாஇலி",
"SWATI": "ச்வாதி",
"SWEDISH": "சுவீடிச்",
"TAGALOG": "டங்லாக்",
"TAHITIAN": "தஇதியன்",
"TAJIK": "தாசிக்",
"TAMIL": "தமிழ்",
"TATAR": "டாட்டர்",
"TELUGU": "தெலுங்கு",
"THAI": "தாய்",
"TIBETAN": "திபெத்திய",
"TIGRINYA": "தைக்ரின்யா",
"TONGA": "Tonga",
"TSONGA": "ட்சாங்கோ",
"TSWANA": "ட்ச்வனா",
"TURKISH": "துருக்கி",
"TURKMEN": "டர்க்மென்",
"TWI": "டிவி",
"UIGHUR": "Uighur",
"UKRAINIAN": "உக்ரெனியன்",
"UNDEFINED": "undefined",
"URDU": "உருது",
"UZBEK": "உச்பெக்",
"VENDA": "வேண்டா",
"VIETNAMESE": "வியட்னாம்",
"VOLAPUK": "வோலாபுக்",
"WALLOON": "வாலூன்",
"WELSH": "வெல்ச்",
"WESTERN_FRISIAN": "மேற்கு ஃபிரிசியன்",
"WOLOF": "ஓலோஃப்",
"XHOSA": "சோசா",
"YIDDISH": "இட்டிச்",
"YORUBA": "யோருபா",
"ZHUANG": "Zhuang",
"ZULU": "சுலு"
},
"phrases": {
"5.0(side)": "5.0(side)",
"5.1(side)": "5.1(side)",
"6.1": "6.1",
"6ch": "6ch",
"7.1": "7.1",
"<New show>": "<புதிய தொடர்>",
"Add": "சேர்",
"Add Pattern": "வடிவத்தை சேர்",
"Apply": "பயன்படுத்து",
"Apply failed: {error}": "பயன்படுத்தல் தோல்வியடைந்தது: {error}",
"Are you sure to delete the following filename pattern?": "பின்வரும் கோப்பு பெயர் வடிவத்தை நீக்க விரும்புகிறீர்களா?",
"Are you sure to delete the following shifted season?": "பின்வரும் மாற்றிய சீசனை நீக்க விரும்புகிறீர்களா?",
"Are you sure to delete the following show?": "பின்வரும் தொடரை நீக்க விரும்புகிறீர்களா?",
"Are you sure to delete the following {track_type} track?": "பின்வரும் {track_type} ஸ்ட்ரீமை நீக்க விரும்புகிறீர்களா?",
"Are you sure to delete this tag?": "இந்த குறிச்சொல்லை நீக்க விரும்புகிறீர்களா?",
"Audio Layout": "ஒலி அமைப்பு",
"Back": "பின்",
"Cancel": "ரத்து",
"Cannot add another stream with disposition flag 'default' or 'forced' set": "'default' அல்லது 'forced' disposition கொடி அமைந்த மற்றொரு ஸ்ட்ரீமை சேர்க்க முடியாது",
"Changes applied and file reloaded.": "மாற்றங்கள் பயன்படுத்தப்பட்டு கோப்பு மீளேற்றப்பட்டது.",
"Cleanup": "சுத்திகரிப்பு",
"Cleanup disabled.": "சுத்திகரிப்பு முடக்கப்பட்டது.",
"Cleanup enabled.": "சுத்திகரிப்பு இயக்கப்பட்டது.",
"Codec": "கோடெக்",
"Continuing edit session.": "திருத்த அமர்வு தொடர்கிறது.",
"Default": "இயல்புநிலை",
"Delete": "நீக்கு",
"Delete Show": "தொடரை நீக்கு",
"Deleted media tag {tag!r}.": "மீடியா குறிச்சொல் {tag!r} நீக்கப்பட்டது.",
"Differences": "வேறுபாடுகள்",
"Differences (file->db/output)": "வேறுபாடுகள் (கோப்பு->DB/வெளியீடு)",
"Discard": "கைவிடு",
"Discard pending metadata changes and quit?": "நிலுவையில் உள்ள மெட்டாடேட்டா மாற்றங்களை கைவிட்டு வெளியேறவா?",
"Discard pending metadata changes and reload the file state?": "நிலுவையில் உள்ள மெட்டாடேட்டா மாற்றங்களை கைவிட்டு கோப்பு நிலையை மீளேற்றவா?",
"Down": "கீழ்",
"Dry-run: would rewrite via temporary file {target_path}": "Dry-run: தற்காலிக கோப்பு {target_path} வழியாக மறுஎழுதப்படும்",
"Edit": "திருத்து",
"Edit Pattern": "வடிவத்தை திருத்து",
"Edit Show": "தொடரை திருத்து",
"Edit filename pattern": "கோப்பு பெயர் வடிவத்தை திருத்து",
"Edit shifted season": "மாற்றிய சீசனை திருத்து",
"Edit stream": "ஸ்ட்ரீமை திருத்து",
"Episode Offset": "அத்தியாய இடச்சரிவு",
"Episode offset": "அத்தியாய இடச்சரிவு",
"File": "கோப்பு",
"File patterns": "கோப்பு வடிவங்கள்",
"First Episode": "முதல் அத்தியாயம்",
"First episode": "முதல் அத்தியாயம்",
"Forced": "கட்டாயம்",
"Help": "உதவி",
"Help Screen": "உதவி திரை",
"ID": "அடையாளம்",
"Identify": "அடையாளம் காட்டு",
"Index": "சுட்டி",
"Index / Subindex": "சுட்டி / துணைச்சுட்டி",
"Index Episode Digits": "அத்தியாய சுட்டி இலக்கங்கள்",
"Index Season Digits": "சீசன் சுட்டி இலக்கங்கள்",
"Indicator Edisode Digits": "அத்தியாய குறியீட்டு இலக்கங்கள்",
"Indicator Season Digits": "சீசன் குறியீட்டு இலக்கங்கள்",
"Keep Editing": "திருத்தலை தொடரு",
"Keeping pending changes.": "நிலுவையில் உள்ள மாற்றங்கள் வைக்கப்படுகின்றன.",
"Key": "சாவி",
"Language": "மொழி",
"Last Episode": "கடைசி அத்தியாயம்",
"Last episode": "கடைசி அத்தியாயம்",
"Layout": "அமைப்பு",
"Media Tags": "மீடியா குறிச்சொற்கள்",
"More than one default audio stream detected and no prompt set": "ஒன்றுக்கும் மேற்பட்ட இயல்புநிலை ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை",
"More than one default audio stream detected! Please select stream": "ஒன்றுக்கும் மேற்பட்ட இயல்புநிலை ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்",
"More than one default subtitle stream detected and no prompt set": "ஒன்றுக்கும் மேற்பட்ட இயல்புநிலை வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை",
"More than one default subtitle stream detected! Please select stream": "ஒன்றுக்கும் மேற்பட்ட இயல்புநிலை வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்",
"More than one default video stream detected and no prompt set": "ஒன்றுக்கும் மேற்பட்ட இயல்புநிலை வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை",
"More than one default video stream detected! Please select stream": "ஒன்றுக்கும் மேற்பட்ட இயல்புநிலை வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்",
"More than one forced audio stream detected and no prompt set": "ஒன்றுக்கும் மேற்பட்ட கட்டாய ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை",
"More than one forced audio stream detected! Please select stream": "ஒன்றுக்கும் மேற்பட்ட கட்டாய ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்",
"More than one forced subtitle stream detected and no prompt set": "ஒன்றுக்கும் மேற்பட்ட கட்டாய வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை",
"More than one forced subtitle stream detected! Please select stream": "ஒன்றுக்கும் மேற்பட்ட கட்டாய வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்",
"More than one forced video stream detected and no prompt set": "ஒன்றுக்கும் மேற்பட்ட கட்டாய வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை",
"More than one forced video stream detected! Please select stream": "ஒன்றுக்கும் மேற்பட்ட கட்டாய வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்",
"Name": "பெயர்",
"New Pattern": "புதிய வடிவம்",
"New Show": "புதிய தொடர்",
"New filename pattern": "புதிய கோப்பு பெயர் வடிவம்",
"New shifted season": "புதிய மாற்றிய சீசன்",
"New stream": "புதிய ஸ்ட்ரீம்",
"No": "இல்லை",
"No changes to apply.": "பயன்படுத்த மாற்றங்கள் இல்லை.",
"No changes to revert.": "மீட்டெடுக்க மாற்றங்கள் இல்லை.",
"Normalization disabled.": "சீரமைப்பு முடக்கப்பட்டது.",
"Normalization enabled.": "சீரமைப்பு இயக்கப்பட்டது.",
"Normalize": "சீரமை",
"Notes": "குறிப்புகள்",
"Pattern": "வடிவம்",
"Planned Changes (file->edited output)": "திட்டமிட்ட மாற்றங்கள் (கோப்பு->திருத்திய வெளியீடு)",
"Quality": "தரம்",
"Quit": "வெளியேறு",
"Remove Pattern": "வடிவத்தை நீக்கு",
"Revert": "மீட்டு",
"Reverted pending changes.": "நிலுவையில் உள்ள மாற்றங்கள் மீட்டெடுக்கப்பட்டன.",
"Save": "சேமி",
"Season Offset": "சீசன் இடச்சரிவு",
"Select a stream first.": "முதலில் ஒரு ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்.",
"Set Default": "இயல்புநிலையாக அமை",
"Set Forced": "கட்டாயமாக அமை",
"Settings Screen": "அமைப்புகள் திரை",
"Numbering Mapping": "மாற்றிய சீசன்கள்",
"Show": "தொடர்",
"Shows": "தொடர்கள்",
"Source Season": "மூல சீசன்",
"SrcIndex": "மூலச் சுட்டி",
"Status": "நிலை",
"Stay": "இரு",
"Stream dispositions": "ஸ்ட்ரீம் disposition-கள்",
"Stream tags": "ஸ்ட்ரீம் குறிச்சொற்கள்",
"Streams": "ஸ்ட்ரீம்கள்",
"SubIndex": "துணைச்சுட்டி",
"Substitute": "மாற்று",
"Substitute pattern": "வடிவத்தை மாற்று",
"Title": "தலைப்பு",
"Type": "வகை",
"Unable to update selected stream.": "தேர்ந்தெடுக்கப்பட்ட ஸ்ட்ரீமைப் புதுப்பிக்க முடியவில்லை.",
"Up": "மேல்",
"Update Pattern": "வடிவத்தை புதுப்பி",
"Updated media tag {tag!r}.": "மீடியா குறிச்சொல் {tag!r} புதுப்பிக்கப்பட்டது.",
"Updated stream #{index} ({track_type}).": "ஸ்ட்ரீம் #{index} ({track_type}) புதுப்பிக்கப்பட்டது.",
"Value": "மதிப்பு",
"Year": "ஆண்டு",
"Yes": "ஆம்",
"add media tag: key='{key}' value='{value}'": "மீடியா குறிச்சொல் சேர்: key='{key}' value='{value}'",
"add {track_type} track: index={index} lang={language}": "{track_type} ஸ்ட்ரீம் சேர்: index={index} lang={language}",
"attached_pic": "attached_pic",
"attachment": "இணைப்பு",
"audio": "ஒலி",
"captions": "உரைப்பதிவுகள்",
"change media tag: key='{key}' value='{value}'": "மீடியா குறிச்சொல் மாற்று: key='{key}' value='{value}'",
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}": "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) disposition சேர்={disposition}",
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}": "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) key சேர்={key} value={value}",
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}": "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) key மாற்று={key} value={value}",
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}": "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) disposition நீக்கு={disposition}",
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}": "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) key நீக்கு={key} value={value}",
"clean_effects": "ஒலி விளைவுகள் மட்டும்",
"comment": "கருத்துரை",
"default": "இயல்புநிலை",
"dependent": "சார்ந்த",
"descriptions": "விளக்கங்கள்",
"dub": "டப்",
"for pattern": "வடிவத்திற்கு",
"forced": "கட்டாயம்",
"from": "இருந்து",
"from pattern": "வடிவத்திலிருந்து",
"from show": "தொடரிலிருந்து",
"hearing_impaired": "கேள்வித்திறன் குறைபாடு",
"karaoke": "கரோக்கே",
"lyrics": "பாடல்வரிகள்",
"metadata": "மெட்டாடேட்டா",
"non_diegetic": "அல்லாத-டைஜெடிக்",
"original": "மூலம்",
"pattern #{id}": "வடிவு #{id}",
"remove media tag: key='{key}' value='{value}'": "மீடியா குறிச்சொல் நீக்கு: key='{key}' value='{value}'",
"remove stream #{index}": "ஸ்ட்ரீம் #{index} நீக்கு",
"show #{id}": "தொடர் #{id}",
"stereo": "ஸ்டீரியோ",
"still_image": "நிலைப்படம்",
"sub index": "துணைச்சுட்டி",
"subtitle": "வசனம்",
"timed_thumbnails": "நேர நிர்ணய சிறுபடங்கள்",
"undefined": "வரையறுக்கப்படாத",
"unknown": "தெரியாத",
"video": "வீடியோ",
"visual_impaired": "பார்வைத்திறன் குறைபாடு"
}
}

docs/file_formats.md Normal file

@@ -0,0 +1,170 @@
# File Formats
This document captures source-file-format notes that complement the normative
requirements in `requirements/source_file_formats.md`.
The first documented format is a Matroska source that carries styled ASS/SSA
subtitle streams together with embedded font attachments.
## Styled ASS In Matroska With Embedded Fonts
These files are typically `.mkv` releases where subtitle rendering quality
depends on keeping both parts of the subtitle package together:
- one or more subtitle streams with codec `ass`
- one or more attachment streams that embed font files used by those subtitles
This matters because ASS subtitles are not plain text subtitles in the narrow
WebVTT sense. They can carry layout, styling, positioning, karaoke, signs, and
other typesetting effects. If the matching embedded fonts are lost, consumers
still see the subtitle text, but the intended styling, and sometimes the glyph
coverage, is degraded.
For FFX this format is special because the ASS subtitle streams should remain
normally editable and mappable, while the related font attachments should be
passed through unchanged.
## Observed Sample
Assessment date: `2026-04-17`
Observed sample file:
- `tests/assets/boruto_s01e283_ssa.mkv`
Commands used for assessment:
```bash
ffprobe tests/assets/boruto_s01e283_ssa.mkv
ffprobe -hide_banner -show_format -show_streams -of json tests/assets/boruto_s01e283_ssa.mkv
```
Observed stream layout:
| Stream index | Kind | Key details |
| --- | --- | --- |
| `0` | video | `codec_name=h264` |
| `1` | audio | `codec_name=aac`, `language=jpn` |
| `2` | subtitle | `codec_name=ass`, `language=ger`, default |
| `3` | subtitle | `codec_name=ass`, `language=eng` |
| `4`-`13` | attachment | `tags.mimetype=font/ttf`, `.ttf` filenames |
Observed attachment filenames:
- `AmazonEmberTanuki-Italic.ttf`
- `AmazonEmberTanuki-Regular.ttf`
- `Arial.ttf`
- `Arial Bold.ttf`
- `Georgia.ttf`
- `Times New Roman.ttf`
- `Times New Roman Bold.ttf`
- `Trebuchet MS.ttf`
- `Verdana.ttf`
- `Verdana Bold.ttf`
Important probe behavior from the real sample:
- Plain `ffprobe` lists the font streams as `Attachment: none`.
- Plain `ffprobe` also prints warnings such as `Could not find codec
parameters for stream 4 (Attachment: none): unknown codec` and later
`Unsupported codec with id 0 for input stream ...`.
- The JSON produced by `FileProperties.FFPROBE_COMMAND_TOKENS`
(`ffprobe -hide_banner -show_format -show_streams -of json`) still exposes
the attachment streams clearly through `codec_type="attachment"` and the
attachment tags.
- In that JSON, the attachment streams do not expose `codec_name`.
This last point is important for FFX: robust detection must not depend on
attachment `codec_name` being present.
## Detection Guidance
Current known indicators for this format are:
- one or more subtitle streams with `codec_type="subtitle"` and
`codec_name="ass"`
- one or more attachment streams with `codec_type="attachment"`
- attachment tags that identify embedded fonts, especially
`tags.mimetype="font/ttf"`
- attachment filenames that end in `.ttf`
The pattern can vary. FFX should therefore treat the above as a cluster of
signals rather than an exact signature tied to one file.
Inference from the observed sample plus FFmpeg documentation:
- MIME matching should not be limited to `font/ttf` alone.
- The Boruto sample uses `font/ttf`.
- FFmpeg's Matroska attachment example uses
`mimetype=application/x-truetype-font` for a `.ttf` attachment.
- Detection should therefore normalize multiple TTF-like MIME values rather
than depend on a single exact string.
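Under these assumptions, a detection sketch might look like the following. The function name and the exact set of TTF-like MIME values are illustrative, not FFX's actual implementation:

```python
# Hypothetical detection sketch; function and MIME set are assumptions.

# TTF-like MIME values to normalize, per the observed sample (font/ttf)
# and FFmpeg's Matroska example (application/x-truetype-font).
TTF_MIME_VALUES = {"font/ttf", "application/x-truetype-font", "application/x-font-ttf"}


def has_styled_ass_with_fonts(probe: dict) -> bool:
    """Detect the ASS-plus-font-attachment cluster in ffprobe JSON output."""
    streams = probe.get("streams", [])

    has_ass = any(
        s.get("codec_type") == "subtitle" and s.get("codec_name") == "ass"
        for s in streams
    )

    def looks_like_font(stream: dict) -> bool:
        tags = {key.lower(): value for key, value in stream.get("tags", {}).items()}
        mime = str(tags.get("mimetype", "")).lower()
        filename = str(tags.get("filename", "")).lower()
        # Deliberately no codec_name check: the sample's attachment
        # streams do not expose codec_name at all.
        return mime in TTF_MIME_VALUES or filename.endswith(".ttf")

    has_font_attachment = any(
        s.get("codec_type") == "attachment" and looks_like_font(s)
        for s in streams
    )
    return has_ass and has_font_attachment
```

The helper treats MIME type and filename suffix as independent signals, matching the cluster-of-signals guidance above.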
## Processing Expectations In FFX
The format-specific requirements live in
`requirements/source_file_formats.md`. In practical terms, FFX should:
- recognize the ASS-plus-font-attachment pattern even when attachment probe
data is incomplete
- tell the operator that the pattern was detected and that special handling is
being used
- reject sidecar subtitle import for such sources, because converting or
replacing these subtitle tracks with ordinary external text subtitles would
break the intended subtitle package
- continue to allow normal manipulation of the ASS subtitle tracks themselves
- preserve the font attachment streams unchanged
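The attachment-preservation expectation can be sketched as a plain stream-copy remux. The helper below is illustrative only, not FFX's actual pipeline, but `-map 0 -c copy` is the standard ffmpeg shape that carries every stream, attachments included, without re-encoding:

```python
# Illustrative helper; FFX's real conversion pipeline differs.
def build_remux_command(source: str, target: str) -> list[str]:
    return [
        "ffmpeg", "-hide_banner",
        "-i", source,
        "-map", "0",   # keep all streams, including the font attachments
        "-c", "copy",  # stream copy keeps attachment payloads bit-identical
        target,
    ]


print(" ".join(build_remux_command("input.mkv", "output.mkv")))
```

When video or audio must be re-encoded, per-type codec options can override the blanket copy while `-map 0` still carries the attachment streams through.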
## FFmpeg Notes
Relevant FFmpeg documentation confirms several behaviors that line up with
FFX's needs:
- FFmpeg documents `-attach` as adding an attachment stream to the output, and
explicitly names Matroska fonts used in subtitle rendering as an example.
- FFmpeg documents attachment streams as regular streams that are created after
the mapped media streams.
- FFmpeg documents `-dump_attachment` for extracting attachment streams, which
is useful for debugging or validating a source file's embedded fonts.
- FFmpeg's Matroska example requires a `mimetype` metadata tag for attached
fonts, which is consistent with using attachment tags as detection signals.
- FFmpeg also notes that attachments are implemented as codec extradata. That
helps explain why probe output for attachment streams can look different from
ordinary audio, video, and subtitle streams.
Implication for FFX:
- Attachment preservation is not an optional cosmetic feature for this format.
It is part of preserving the subtitle package correctly.
## Jellyfin Notes
Jellyfin's documentation also supports keeping this format intact:
- Jellyfin's subtitle compatibility table lists `ASS/SSA` as supported in
`MKV` and not supported in `MP4`.
- Jellyfin notes that when subtitles must be transcoded, they are either
converted to a supported format or burned into the video, and burning them in
is the most CPU-intensive path.
- Jellyfin's subtitle-extraction example for `SSA/ASS` first dumps attachment
streams and then extracts the ASS subtitle stream, which reflects the real
relationship between ASS subtitles and embedded fonts in MKV releases.
- Jellyfin's font documentation says text-based subtitles require fonts to
render properly.
- Jellyfin's configuration documentation says the web client uses configured
fallback fonts for ASS subtitles when other fonts such as MKV attachments or
client-side fonts are not available.
Inference from the Jellyfin compatibility tables:
- Keeping this subtitle format in Matroska is the safest interoperability
choice for Jellyfin consumers.
- Converting the subtitle payload to WebVTT would lose styled ASS behavior.
- Dropping the attachment streams would force client or fallback font
substitution and can change appearance or glyph coverage.
## References
- FFmpeg documentation: https://ffmpeg.org/ffmpeg.html
- Jellyfin codec support: https://jellyfin.org/docs/general/clients/codec-support/
- Jellyfin configuration and fonts: https://jellyfin.org/docs/general/administration/configuration/


@@ -1,28 +0,0 @@
# Lean Interface Iteration
Rule set name: `lean-interface-iteration`
Rule set ID: `LII`
Status: optional, prompt-activated only
Trigger examples:
- `Apply the lean-interface-iteration rules.`
- `Apply LII rules.`
LII-0001: Apply this rule set only when it is explicitly requested in the prompt.
LII-0002: The target of work under this rule set is the iterated product state for the addressed iteration only.
LII-0003: Optimize the addressed interface toward the leanest and least complex model that still satisfies the iteration order.
LII-0004: Backward compatibility, legacy aliases, and compatibility shims are not required unless the prompt explicitly asks to preserve them.
LII-0005: Prefer one authoritative interface over multiple overlapping parameters, flags, or naming variants.
LII-0006: Remove or avoid transitional interface layers when they are not required by the addressed iteration order.
LII-0007: Update affected tests, guidance, requirements, and documentation so they describe the simplified interface model rather than a mixed legacy-and-new model.
LII-0008: Never change behavior, interfaces, or surrounding areas that are not addressed by the current iteration order.


@@ -1,56 +0,0 @@
# Preparation Script Design
Rule set name: `preparation-script-design`
Rule set ID: `PSD`
Status: optional, prompt-activated only
Trigger examples:
- `Apply the preparation-script-design rules.`
- `Apply PSD rules.`
PSD-0001: Apply this rule set only when it is explicitly requested in the prompt.
PSD-0002: Use this rule set for scripts whose purpose is to prepare, verify, or expose a local development or automation environment rather than to perform product runtime behavior.
PSD-0003: Keep a preparation script focused on environment readiness, dependency installation, local helper exposure, and clear verification output; do not mix unrelated product logic into the script.
PSD-0004: Design the script to be idempotent so repeated runs converge on the same prepared state without unnecessary reinstallation or destructive side effects.
PSD-0005: Provide a verification-only mode such as `--check` that reports readiness without installing, modifying, or creating dependencies.
PSD-0006: Separate component checks from installation steps so the script can report what is missing before or after attempted remediation.
PSD-0007: Group required capabilities into clear purpose-oriented sections such as support toolchains, local package bundles, generated environment helpers, or other relevant readiness areas instead of presenting one undifferentiated dependency list.
PSD-0008: Prefer explicit per-component check helpers over opaque one-shot checks so failures remain traceable and easy to extend.
PSD-0009: Generate or update environment helper files only when they provide a stable, reusable way to expose repo-local or workspace-local tools, paths, or environment variables.
PSD-0010: Generated environment helper files shall be safe to source multiple times and should avoid duplicating path entries or clobbering unrelated user environment state.
PSD-0011: When a preparation flow seeds optional user-owned files such as config templates, do so non-destructively by creating them only when absent unless the prompt explicitly requests overwrite behavior.
PSD-0012: Report status in a concise scan-friendly line format of the shape `[status] Label: detail`, where the label names the checked component and the detail string stays short and specific.
PSD-0013: Prefer a small canonical status vocabulary in those report lines, with `ok` for satisfied checks, `warn` for non-blocking gaps, and a failure status such as `failed` for blocking or unsuccessful states.
PSD-0014: When a preparation script uses terminal colors in its status output, apply a consistent severity mapping so `ok` is green, `warn` is yellow, and all other status levels are red.
PSD-0015: In bracketed status markers such as `[ok]` or `[warn]`, keep the square brackets uncolored and apply the severity color only to the inner status text.
PSD-0016: Colorized status output shall degrade safely in non-terminal or non-color contexts so the script remains readable and automation-friendly without ANSI support.
PSD-0017: End with an explicit readiness conclusion that distinguishes between successful preparation, incomplete prerequisites, and failed installation attempts.
PSD-0018: Installation logic should use the narrowest supported platform-specific package-manager actions necessary for the declared scope and should fail clearly when no supported installation path is available.
PSD-0019: Treat repo-local helper tooling and local package installation boundaries explicitly rather than assuming global installs, especially when the prepared environment is intended to be reproducible.
PSD-0020: Keep the script suitable for both interactive local developer use and non-interactive automation checks by avoiding prompts during normal execution unless the prompt explicitly requires interactivity.
PSD-0021: When a script depends on generated helper files or adjacent validation helpers, update those supporting files only as needed to keep the preparation flow coherent and usable.
PSD-0022: Verify shell syntax after changes and, when feasible, run a dry readiness check so the resulting preparation flow is validated rather than only written.


@@ -1,13 +1,13 @@
[project]
name = "ffx"
description = "FFX recoding and metadata managing tool"
version = "0.2.4"
version = "0.4.2"
license = {file = "LICENSE.md"}
dependencies = [
"requests",
"jinja2",
"click",
"textual",
"textual>=8.0",
"sqlalchemy",
]
readme = {file = "README.md", content-type = "text/markdown"}


@@ -1,98 +0,0 @@
# Architecture
## Architecture Goals
- Keep the tool small, local, and easy to reason about.
- Separate media inspection, stored normalization rules, and conversion execution clearly enough that users can inspect and adjust behavior.
- Favor explicit local state and deterministic rule application over opaque automation.
- Make external runtime dependencies and platform assumptions visible.
## System Context
- Primary actors:
- Local operator running the CLI.
- Local operator using the Textual TUI to inspect files and maintain rules.
- External systems:
- `ffprobe` for media introspection.
- `ffmpeg` for conversion and extraction.
- TMDB API for optional show and episode metadata.
- Local filesystem for source media, generated outputs, subtitles, logs, config, and database files.
- Data entering the system:
- Media container and stream metadata from source files.
- Regex patterns and per-show normalization rules entered in the TUI.
- Optional config values from `~/.local/etc/ffx.json`.
- Optional TMDB identifiers and CLI overrides.
- Optional external subtitle files.
- Data leaving the system:
- Normalized output media files.
- Extracted stream files from unmux operations.
- SQLite rows representing shows, patterns, tracks, tags, shifted seasons, and properties.
- Local log output and console messages.
## High-Level Building Blocks
- Frontend, CLI, API, or worker:
- A Click-based CLI in [`src/ffx/cli.py`](src/ffx/cli.py), exposed as the `ffx` command and via `python -m ffx`, including lightweight maintenance wrappers for bundle setup, workstation preparation, and upgrade tasks.
- A Textual terminal UI rooted in [`src/ffx/ffx_app.py`](src/ffx/ffx_app.py) with screens for shows, patterns, file inspection, tracks, tags, and shifted seasons.
- Core business logic:
- Descriptor objects model media files, shows, and tracks.
- Controllers encapsulate CRUD operations and workflow orchestration for shows, patterns, tags, tracks, season shifts, configuration, and conversion.
- `MediaDescriptorChangeSet` computes differences between a file and its stored target schema to drive metadata and disposition updates.
- File inspection caches combined `ffprobe` data and crop-detection results per source and sampling window within one process to avoid repeated subprocess work.
- Storage:
- SQLite via SQLAlchemy ORM, with schema rooted in shows, patterns, tracks, media tags, track tags, shifted seasons, and generic properties.
- Ordered schema migrations are loaded dynamically from per-version-step modules under [`src/ffx/model/migration/`](src/ffx/model/migration/).
- A configuration JSON file supplies optional path, metadata-filtering, and filename-template settings.
- Integration adapters:
- Process execution wrapper for `ffmpeg`, `ffprobe`, `nice`, and `cpulimit`, with explicit disabled states for niceness and CPU limiting, support for both absolute `cpulimit` values and machine-wide percent input, and a combined `cpulimit -- nice -n ... <command>` execution shape when both limits are configured.
- HTTP adapter for TMDB via `requests`.
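The combined execution shape described above can be sketched as follows. This is a minimal illustration, not the actual adapter API: the function name, parameters, and percent-conversion rule are assumptions based on the behavior stated in this document.

```python
def build_command(cmd, niceness=None, cpulimit=None, cpu_count=1):
    """Wrap a media command with optional nice and cpulimit layers.

    niceness=None and cpulimit in (None, 0) represent the explicit
    disabled states; a string such as "25%" is treated as a
    machine-wide share of the present CPUs.
    """
    if isinstance(cpulimit, str) and cpulimit.endswith("%"):
        # Convert a machine-wide percentage into an absolute cpulimit
        # value, where 100 corresponds to one full core.
        cpulimit = int(float(cpulimit[:-1]) / 100 * cpu_count * 100)
    wrapped = list(cmd)
    if niceness is not None:
        wrapped = ["nice", "-n", str(niceness)] + wrapped
    if cpulimit:
        # cpulimit -l <limit> -- <command> caps the whole launched
        # command, including the inner nice wrapper.
        wrapped = ["cpulimit", "-l", str(cpulimit), "--"] + wrapped
    return wrapped
```

With both limits set, this yields the `cpulimit -- nice -n ... <command>` shape; with neither set, the media command runs unwrapped.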
## Data And Interface Notes
- Key entities or records:
- `Show`: canonical TV show metadata plus digit-formatting rules, optional show-level notes, and an optional show-level encoding-quality fallback.
- `Pattern`: regex rule tying filenames to one show and one target media schema.
- `Track` and `TrackTag`: persisted target stream records, codec, dispositions, audio layout, and stream-level tags. Detailed source-to-target mapping rules live in `requirements/subtrack_mapping.md`.
- `MediaTag`: persisted container-level metadata for a pattern.
- `ShiftedSeason`: mapping from source numbering ranges to adjusted season and episode numbers, owned either by a show as fallback or by a pattern as override.
- `Property`: internal key-value storage currently used for database versioning.
- External interfaces:
- CLI commands for conversion, inspection, extraction, and crop detection.
- TUI workflows for rule authoring and rule maintenance.
- Environment variable `TMDB_API_KEY` for TMDB access.
- Config keys `databasePath`, `logDirectory`, and `outputFilenameTemplate`, plus optional metadata-filter rules.
- Validation rules:
- Only supported media-file extensions are accepted for conversion.
  - The stored database version must either match the runtime-required version or have a supported sequential migration path to it.
- A normalized descriptor may have at most one default and one forced stream per relevant track type.
- Shifted-season ranges are intended not to overlap within the same owner scope and season, and runtime resolution prefers pattern-owned matches over show-owned matches.
- TMDB lookups require a show ID and season and episode numbers.
- Error-handling approach:
- User-facing operational failures are raised as `click.ClickException` or warnings.
- Ambiguous default and forced stream states trigger prompts unless `--no-prompt` is set, in which case the command fails fast.
- External-process failures and invalid media are surfaced through logs and command errors rather than retries, except for TMDB rate-limit retries.
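The at-most-one-default and at-most-one-forced validation rule above can be checked with a small validator. The stream tuples here are hypothetical stand-ins for the real descriptor objects, which carry more fields:

```python
from collections import Counter

def check_dispositions(streams):
    """Report track types with more than one default or forced stream.

    `streams` is a list of (track_type, dispositions) pairs, e.g.
    ("audio", {"default"}).
    """
    problems = []
    for flag in ("default", "forced"):
        counts = Counter(
            track_type
            for track_type, dispositions in streams
            if flag in dispositions
        )
        problems += [
            f"more than one {flag} {track_type} stream"
            for track_type, n in counts.items() if n > 1
        ]
    return problems
```

An empty result means the descriptor is unambiguous; a non-empty result corresponds to the prompt-or-fail-fast behavior described under error handling.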
## Deployment And Operations
- Runtime environment:
- Local Python environment with the package installed and `ffmpeg`, `ffprobe`, `nice`, and `cpulimit` available on `PATH`.
- Deployment shape:
- Single-process command execution on demand; no daemon, queue, or network service of its own.
- Secrets and configuration handling:
- TMDB secret is read from `TMDB_API_KEY`.
- User config is read from `~/.local/etc/ffx.json`.
- Database path may also be overridden per command via `--database-file`.
- Logging and monitoring approach:
- File and console logging configured per invocation.
- Default log file path is `~/.local/var/log/ffx.log`.
- No dedicated monitoring integration is present.
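A minimal `~/.local/etc/ffx.json` using the config keys named above might look like the following; all values, including the template variables, are purely illustrative and not the tool's documented template contract:

```json
{
  "databasePath": "~/.local/var/ffx/ffx.db",
  "logDirectory": "~/.local/var/log",
  "outputFilenameTemplate": "{{ show }} - S{{ season }}E{{ episode }} - {{ title }}.mkv"
}
```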
## Open Technical Questions
- Question: Should Linux-specific assumptions such as `/dev/null`, `nice`, `cpulimit`, and `~/.local` remain part of the supported-platform contract?
- Risk: Portability and operational behavior are underspecified for non-Linux environments.
- Next decision needed: Either document Linux-like systems as the official support boundary or refactor the process and path handling for broader portability.
- Question: Should placeholder TUI surfaces such as settings and help become part of the required product surface or stay explicitly out of scope?
- Risk: The UI appears broader than the actually finished feature set.
- Next decision needed: Either remove or complete placeholder screens and update requirements accordingly.


@@ -1,68 +0,0 @@
# Pattern Management
This file defines the behavioral contract for managing shows, patterns, and
pattern-backed filename matching.
Primary source: actual tool code in `src/ffx/`.
Secondary source: operator intent captured in task discussion.
## Scope
- The show, pattern, and track hierarchy stored in SQLite.
- The role of a pattern as a reusable normalization definition for related media files.
- Filename-driven assignment of a scanned media file to one show through one matching pattern.
- Duplicate-match handling when more than one pattern matches the same filename.
## Terms
- `show`: logical series identity such as one TV show entry in the database.
- `pattern`: regex-backed normalization definition attached to one show.
- `track`: one persisted target-track definition attached to one pattern.
- `scanned media file`: one source file currently being inspected or converted.
- `duplicate pattern match`: the state in which more than one stored pattern matches the same scanned media file's name.
- `pattern-backed target schema`: the combination of one pattern's stored media tags and stored track definitions.
## Rules
- `PATTERN_MANAGEMENT-0001`: The domain model shall treat a show as the parent entity for patterns that describe distinct release families or normalization schemas for that show. A show may temporarily exist without patterns during editing or initial TUI creation.
- `PATTERN_MANAGEMENT-0002`: Each persisted pattern shall belong to exactly one show.
- `PATTERN_MANAGEMENT-0003`: The domain model shall treat a pattern as the reusable normalization definition for a series of media files expected to share the same internal track layout and materially similar stream and container metadata.
- `PATTERN_MANAGEMENT-0004`: Each persisted track definition shall belong to exactly one pattern.
- `PATTERN_MANAGEMENT-0005`: A pattern may also carry pattern-level media tags. The pattern's media tags plus its track definitions together form the pattern-backed target schema.
- `PATTERN_MANAGEMENT-0006`: A scanned media file shall resolve to at most one pattern and therefore at most one show.
- `PATTERN_MANAGEMENT-0007`: If no pattern matches a filename, the file shall remain unmatched rather than being assigned implicitly.
- `PATTERN_MANAGEMENT-0008`: If more than one pattern matches the same filename, the system shall raise a duplicate pattern match error instead of silently selecting one.
- `PATTERN_MANAGEMENT-0009`: Duplicate-match detection shall apply regardless of whether the competing patterns belong to the same show or to different shows.
- `PATTERN_MANAGEMENT-0010`: Exact duplicate pattern definitions for the same show should not create multiple persisted pattern rows.
- `PATTERN_MANAGEMENT-0011`: A persisted pattern shall define one or more tracks. Creating or retaining a zero-track pattern in the database is invalid managed state and shall be prohibited.
- `PATTERN_MANAGEMENT-0012`: A show may exist without patterns as an intermediate editing state, for example when a user creates the show first in the TUI and adds patterns later.
- `PATTERN_MANAGEMENT-0013`: Operator-facing pattern management should expose the owning show, regex pattern, stored track set, and stored media-tag set so a user can reason about matching and normalization behavior.
- `PATTERN_MANAGEMENT-0014`: Matching semantics shall be deterministic and documented. Implicit "last matching pattern wins" behavior is not acceptable released behavior.
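The deterministic matching contract in the rules above can be sketched as follows. The function signature, return shapes, and exception name are illustrative and deliberately simplified relative to the actual `pattern_controller` API:

```python
import re

class DuplicatePatternMatchError(Exception):
    """More than one stored pattern matched the same filename."""

def match_filename(filename, patterns):
    """Return the single matching pattern row, or None when unmatched.

    `patterns` is an iterable of (pattern_id, regex_text) pairs scanned
    in a fixed sorted order, so behavior never depends on database row
    order ("last matching pattern wins" is explicitly ruled out).
    """
    matches = [
        (pattern_id, regex_text)
        for pattern_id, regex_text in sorted(patterns)
        if re.search(regex_text, filename)
    ]
    if len(matches) > 1:
        ids = [pattern_id for pattern_id, _ in matches]
        raise DuplicatePatternMatchError(
            f"{filename!r} matched patterns {ids}"
        )
    return matches[0] if matches else None
```

This covers the three acceptance outcomes: exactly one match, an explicit unmatched state, and an explicit duplicate-match error, regardless of which shows own the competing patterns.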
## Acceptance
- A filename that matches exactly one pattern yields one matched pattern and one show identity.
- A filename that matches no pattern yields no matched pattern and an unmatched state.
- A filename that matches more than one pattern yields an explicit duplicate-match error.
- A pattern-backed target schema can be reconstructed from one pattern's stored media tags and stored track definitions.
- A show may be stored before any patterns are attached to it.
- A pattern cannot be stored or retained as a valid managed pattern unless at least one track is defined for it.
- Pattern-backed conversion never proceeds with two competing matching patterns for the same input filename.
## Current Code Fit
- `src/ffx/model/show.py` implements a one-to-many `Show -> Pattern` relationship.
- `src/ffx/model/pattern.py` implements `Pattern.show_id`, a one-to-many `Pattern -> Track` relationship, a one-to-many `Pattern -> MediaTag` relationship, and a unique `(show_id, pattern)` constraint for freshly created databases.
- `src/ffx/model/track.py` implements `Track.pattern_id`, so each persisted track belongs to one pattern.
- `src/ffx/model/pattern.py` reconstructs a pattern-backed target schema through `Pattern.getMediaDescriptor(...)`, combining stored media tags and stored tracks.
- `src/ffx/file_properties.py` assumes a scanned file resolves to at most one pattern, because it stores only one `self.__pattern` and derives one `show_id` from it.
- `src/ffx/pattern_controller.py` prevents exact duplicate `(show_id, pattern)` definitions during create and update flows, and it refreshes cached compiled regexes when stored pattern expressions change.
- `src/ffx/pattern_controller.py` now enforces duplicate-match safety: `matchFilename(...)` scans deterministically, returns exactly one match, returns `{}` for no match, and raises an explicit duplicate-pattern-match error when more than one pattern matches the same filename.
- The current persistence layer already aligns with the intended empty-show workflow because a show can exist without patterns.
- New pattern creation and schema replacement flows now require at least one track, and `TrackController.deleteTrack(...)` prevents deleting the last persisted track from a pattern.
- Trackless legacy rows can still exist in preexisting databases, but matching now rejects them explicitly instead of letting them participate silently.
## Risks
- The intended "release family" meaning of a pattern is a domain assumption, not something the code verifies automatically across all files matching that pattern.
- Preexisting databases created before the newer validation rules may still contain invalid rows, so upgrade and cleanup paths should continue to treat explicit validation failures as recoverable operator signals.


@@ -1,124 +0,0 @@
## Purpose And Scope
- Project name: FFX
- User problem: TV episode files from mixed sources arrive with inconsistent codecs, stream metadata, subtitle layouts, season and episode numbering, and output filenames, which makes them awkward to archive and use in media-player applications.
- Target users: Individual operators curating a local TV media library on a workstation, especially users willing to define normalization rules per show.
- Success outcome: A user can inspect source files, define reusable show and pattern rules, and produce output files whose streams, metadata, and filenames follow a predictable schema for web playback and library import.
- Out of scope:
- Multi-user or hosted service workflows.
- General movie-library management.
- Distributed transcoding or remote job orchestration.
- Broad media-server administration beyond file preparation.
## Required Product
- Deliverable type: Installable Python command-line application with a Textual terminal UI for inspection and rule editing.
- Core capabilities:
- Maintain an SQLite-backed database of shows, filename-matching patterns, per-pattern stream layouts and metadata tags, and optional season-shift rules.
- Inspect existing media files through `ffprobe` and compare discovered stream metadata with stored normalization rules.
- Convert media files through `ffmpeg` into a normalized output layout, including video recoding, audio transcoding to Opus, metadata cleanup and rewrite, and controlled disposition flags.
- Build output filenames from detected or configured show, season, and episode information, optionally enriched from TMDB and a configurable Jinja-style filename template.
- Support auxiliary file operations such as subtitle import, unmuxing, crop detection, rename-only conversion runs, and direct in-place episode renaming.
- Supported environments:
- Local execution on a Python-capable workstation.
- Best-supported on Linux-like systems because the implementation assumes `~/.local`, `/dev/null`, `nice`, and `cpulimit`.
- Requires `ffmpeg`, `ffprobe`, and `cpulimit` on `PATH`.
- Operational owner: The local user running the tool and maintaining its config, database, and external tooling.
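Filename generation from the configurable Jinja-style template can be illustrated with a small stand-in. The real tool renders a Jinja2 template string taken from the `outputFilenameTemplate` config key; this sketch mimics the same idea with `str.format` so it runs without the `jinja2` dependency, and the token names are assumptions:

```python
def render_filename(template, **tokens):
    """Fill season/episode tokens into an output-filename template."""
    return template.format(**tokens)

# Hypothetical template; the real template syntax and variables are
# defined by the tool's Jinja2 configuration, not by this sketch.
template = "{show} - S{season:02d}E{episode:02d} - {title}.mkv"
name = render_filename(template, show="Example Show", season=1,
                       episode=2, title="Pilot")
# name == "Example Show - S01E02 - Pilot.mkv"
```

The zero-padded `S01E02` shape matches the show-level digit-formatting idea: padding widths would come from the show's stored formatting rules rather than being hard-coded.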
## Suggested User Stories
- As a library maintainer, I want to define show-specific matching rules once so that future source files can be normalized automatically.
- As an operator, I want to inspect a file before conversion so that I can compare its actual streams and tags against the stored target schema.
- As a user preparing web-playback files, I want to recode video and audio with a small set of predictable options so that results are compatible and consistently named.
- As a user dealing with nonstandard releases, I want CLI overrides for language, title, stream order, default and forced tracks, and season and episode data so that one-off fixes do not require database edits first.
- As a user importing anime or other shifted numbering schemes, I want season and episode offsets at the show level with optional pattern-specific overrides so that generated filenames align with TMDB and media-library expectations.
## Functional Requirements
- The system shall provide a CLI entrypoint named `ffx` with commands for `convert`, `inspect`, `shows`, `rename`, `unmux`, `cropdetect`, `setup`, `configure_workstation`, `upgrade`, `version`, and `help`.
- The system shall support a two-step local installation and preparation flow:
- `tools/setup.sh` is the bootstrap entrypoint for the first step and shall own bundle virtualenv creation, package installation, shell alias exposure, and optional Python test-package installation.
- `tools/configure_workstation.sh` is the bootstrap entrypoint for the second step and shall own workstation dependency checks and installation plus local config and directory seeding.
- After the bundle is installed, `ffx setup` and `ffx configure_workstation` shall remain aligned wrapper entrypoints for those same two steps.
- The CLI command `ffx setup` shall act as a wrapper for the first-step bundle-preparation flow in `tools/setup.sh`.
- The CLI command `ffx configure_workstation` shall act as a wrapper for the second-step preparation flow in `tools/configure_workstation.sh`.
- The system shall persist reusable normalization rules in SQLite for:
- shows and show formatting digits,
- optional show-level notes,
- optional show-level quality defaults,
- regex-based filename patterns,
- per-pattern media tags,
- per-pattern stream definitions,
- show-level and pattern-level shifted-season mappings,
- internal database version properties.
- The system shall apply supported ordered database migrations automatically when opening an older local database file and shall fail fast when no supported path exists.
- Before applying a required database migration, the system shall show the current version, target version, required sequential steps, and whether each corresponding migration module is present, then require user confirmation.
- Before applying a confirmed file-backed database migration, the system shall create an in-place backup copy whose filename includes the covered version range.
- Detailed show, pattern, and duplicate-match management rules live in `requirements/pattern_management.md`.
- The system shall inspect source media using `ffprobe` and derive a structured description of container metadata and streams.
- The system shall optionally open a Textual UI to browse shows, inspect files, and create, edit, or delete shows, patterns, stream definitions, tags, and shifted-season rules.
- The system shall match filenames against stored regex patterns to decide whether an input file should inherit a target stream and metadata schema.
- The system shall convert supported input files (`mkv`, `mp4`, `avi`, `flv`, `webm`) with `ffmpeg`, supporting at least:
- VP9, AV1, and H.264 video encoding,
- Opus audio encoding with bitrate selection based on channel layout,
- metadata and disposition rewriting,
- optional crop detection and crop application,
- optional deinterlacing and denoising,
- optional subtitle import from external files,
- rename-only move mode.
- The system shall support optional TMDB lookups to resolve show names, years, and episode titles when a show ID, season, and episode are available.
- The system shall generate output filenames from show metadata, season and episode indices, and episode names using the configured filename template.
- The system shall allow CLI overrides for stream languages, stream titles, default and forced tracks, stream order, TMDB show and episode data, output directory, label prefix, and processing resource limits.
- The system shall resolve encoding quality by precedence `CLI override -> pattern -> show -> encoder default` and shall report the chosen value and source.
- The system shall resolve season shifting by precedence `pattern -> show -> identity default` and shall report the chosen mapping and source.
- Processing resource limit rules:
- `--nice` shall accept niceness values from `-20` through `19`; omitting the option shall disable niceness adjustment.
- `--cpu` shall accept either a positive absolute `cpulimit` value such as `200`, or a percentage suffixed with `%` such as `25%` to represent a share of present CPUs; omitting the option or using `0` shall disable CPU limiting.
- When both limits are configured, the process wrapper shall execute the target command through `cpulimit` around a `nice -n ...` invocation so both limits apply to the launched media command.
- The system shall support extracting streams into separate files via `unmux` and reporting suggested crop parameters via `cropdetect`.
- The system shall support in-place episode renaming via `rename`, requiring a `--prefix`, accepting optional `--season` and `--suffix` overrides, preserving the source extension, and supporting dry-run output without moving files.
- Crop detection shall use a configurable sampling window, defaulting to a 60-second seek and a 180-second analysis duration, and repeated crop-detection requests for the same source plus sampling window shall reuse cached results within one process.
- The system shall handle invalid input and system failures gracefully by logging warnings or raising `click` errors for missing files, invalid media, missing TMDB credentials, incompatible database versions, and ambiguous track dispositions when prompting is disabled.
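The `CLI override -> pattern -> show -> encoder default` precedence can be condensed into a first-non-None resolution that also reports where the chosen value came from. The function and source labels are illustrative:

```python
def resolve_quality(cli_value, pattern_value, show_value, encoder_default):
    """Pick the effective encoding quality and report its source.

    Earlier layers win; None means "not set at this layer".
    """
    for source, value in (
        ("cli", cli_value),
        ("pattern", pattern_value),
        ("show", show_value),
        ("encoder default", encoder_default),
    ):
        if value is not None:
            return value, source
    raise ValueError("encoder default must always be set")
```

The season-shifting precedence (`pattern -> show -> identity default`) follows the same first-match-wins pattern, just with rule lookups instead of scalar values.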
## Quality Requirements
- The system should stay understandable as a small local tool: controllers, descriptors, models, and screens should remain separate enough for contributors to trace a workflow end to end.
- The system should produce predictable output for the same database rules, CLI overrides, and source files.
- The system should preserve a lightweight operational footprint: local SQLite state, local log file, no mandatory background services.
- The system should be testable through modern automatically discovered tests and through remaining legacy harness coverage during migration.
- The system should expose enough logging to diagnose failed probes, failed conversions, and rule mismatches without requiring a debugger.
## Constraints And Assumptions
- Technology constraints:
- Python package built with setuptools.
- Primary libraries: `click`, `textual`, `sqlalchemy`, `jinja2`, `requests`.
- Conversion and inspection rely on external executables rather than pure-Python media libraries.
- Hosting or infrastructure constraints:
- Intended for local execution, not server deployment.
- Stores default state in `~/.local/etc/ffx.json`, `~/.local/var/ffx/ffx.db`, and `~/.local/var/log/ffx.log`.
- Timeline constraints:
- The current implemented scope reflects a compact alpha release stream up to version `0.2.4`.
- Team capacity assumptions:
- Maintained as a small codebase where simple patterns and direct controller logic are preferred over framework-heavy abstractions.
- Third-party dependencies:
- `ffmpeg`, `ffprobe`, and `cpulimit`.
- TMDB API access through `TMDB_API_KEY` for metadata enrichment.
- Installation assumptions:
- The Python-side bundle install step and optional Python test extras are managed by `tools/setup.sh`, with `ffx setup` as the aligned wrapper after bootstrap.
- The workstation-preparation step is managed separately by `tools/configure_workstation.sh` or `ffx configure_workstation`.
## Acceptance Scope
- First release boundary:
- Local installation through `pip`.
- Working SQLite-backed rule storage.
- Functional CLI conversion and inspection workflows.
- Textual CRUD flows for shows, patterns, tags, tracks, and shifted seasons.
- TMDB-assisted filename generation, subtitle import, season shifting, database versioning, and configurable output filename templating.
- Excluded follow-up ideas:
- Completing placeholder screens such as settings and help.
- Hardening platform portability beyond Linux-like systems.
- Broader media types, richer release packaging, and production-grade background processing.
- Demonstration scenario:
- Inspect a TV episode file, define or update the matching show and pattern in the TUI, then run `ffx convert` so the result uses the stored stream schema, optional TMDB episode naming, and a normalized output filename.


@@ -1,177 +0,0 @@
# Shifted Seasons Handling
This file defines the behavioral contract for mapping source season and episode
numbering to target season and episode numbering through stored shifted-season
rules.
Primary sources:
- `requirements/project.md`
- `requirements/architecture.md`
- actual tool code in `src/ffx/`
Secondary source:
- `SCRATCHPAD.md`, used only to clarify current hardening gaps and not as the
primary contract source.
## Scope
- Persisting shifted-season rules in SQLite.
- Allowing shifted-season rules to be attached either to a show or to a
specific pattern.
- Selecting at most one active shifted-season rule for one concrete source
season and episode tuple.
- Applying additive season and episode offsets to produce target numbering.
- Using shifted target numbering during `convert` for TMDB episode lookup and
generated season and episode filename tokens.
- Managing show-level default mappings and pattern-level override mappings from
the Textual editing workflows.
## Out Of Scope
- General filename parsing rules for detecting season and episode values.
- Standalone `rename` command behavior, which currently uses explicit rename
inputs rather than stored shifted-season rules.
- Stream or track mapping behavior unrelated to season and episode numbering.
## Terms
- `shifted-season rule`: one persisted row describing how one source-numbering
range maps to target numbering through additive offsets.
- `show-level shifted-season rule`: a rule attached directly to a show and used
as the fallback mapping layer for that show.
- `pattern-level shifted-season rule`: a rule attached directly to a pattern and
used as the override mapping layer for that pattern.
- `source numbering`: the season and episode values detected from the current
source file or supplied as source-side conversion inputs before shifting.
- `target numbering`: the season and episode values after one active
shifted-season rule has been applied.
- `original season`: the source-domain season number a shifted-season rule is
eligible to match.
- `episode range`: the optional source-domain episode interval covered by one
shifted-season rule.
- `open bound`: an unbounded start or end of the episode range. Current storage
uses `-1` as the internal sentinel for an open bound.
- `active shifted-season rule`: the single rule selected for one concrete input
after precedence resolution.
- `identity mapping`: the default `1:1` outcome where source numbering is used
unchanged.
## Rules
- `SHIFTED_SEASONS_HANDLING-0001`: The domain model shall allow a
shifted-season rule to be owned by exactly one of:
- one show
- one pattern
- `SHIFTED_SEASONS_HANDLING-0002`: A single shifted-season rule shall not
belong to both a show and a pattern at the same time.
- `SHIFTED_SEASONS_HANDLING-0003`: A shifted-season rule shall carry these
fields: `original_season`, `first_episode`, `last_episode`,
`season_offset`, and `episode_offset`.
- `SHIFTED_SEASONS_HANDLING-0004`: `season_offset` and `episode_offset` shall
be additive signed integers applied to matched source numbering to produce
target numbering.
- `SHIFTED_SEASONS_HANDLING-0005`: A shifted-season rule shall match a source
tuple only when:
- the source season equals `original_season`
- the source episode is greater than or equal to `first_episode` when the
lower bound is closed
- the source episode is less than or equal to `last_episode` when the upper
bound is closed
- `SHIFTED_SEASONS_HANDLING-0006`: An open lower or upper episode bound shall
represent an unbounded side of the covered source episode range.
- `SHIFTED_SEASONS_HANDLING-0007`: If one shifted-season rule matches, target
numbering shall be:
- `target season = source season + season_offset`
- `target episode = source episode + episode_offset`
- `SHIFTED_SEASONS_HANDLING-0008`: If no shifted-season rule matches, source
numbering shall pass through unchanged.
- `SHIFTED_SEASONS_HANDLING-0009`: Shifted-season handling shall operate in a
source-to-target numbering model. Stored rules map detected source numbering
to the target numbering used by conversion-facing metadata and output naming.
- `SHIFTED_SEASONS_HANDLING-0010`: Pattern matching identifies the owning show
and optionally a more specific owning pattern. Resolution of the active
shifted-season rule shall use this precedence order:
- matching pattern-level rule
- matching show-level rule
- identity mapping
- `SHIFTED_SEASONS_HANDLING-0011`: At most one shifted-season rule may be
active for one concrete source season and episode tuple. Shifted-season rules
shall never stack or compose.
- `SHIFTED_SEASONS_HANDLING-0012`: Within one owner scope, shifted-season rules
shall not overlap in their effective episode coverage for the same
`original_season`.
- `SHIFTED_SEASONS_HANDLING-0013`: If a shifted-season rule uses two closed
episode bounds, `last_episode` shall be greater than or equal to
`first_episode`.
- `SHIFTED_SEASONS_HANDLING-0014`: Shifted-season rule evaluation shall be
deterministic. Released behavior shall not depend on arbitrary database row
order when invalid overlapping rules exist.
- `SHIFTED_SEASONS_HANDLING-0015`: A pattern-level rule is permitted to map to
zero offsets. Such a rule is a valid explicit override that beats show-level
fallback and produces identity mapping for its covered source range.
- `SHIFTED_SEASONS_HANDLING-0016`: During `convert`, when show, season, and
episode values are available and stored shifting is active, the shifted target
numbering shall drive:
- TMDB episode lookup
- season and episode filename tokens such as `S01E02`
- generated episode basenames that include season and episode numbering
- `SHIFTED_SEASONS_HANDLING-0017`: When conversion is supplied explicit
target-domain season or episode values for TMDB naming, the system shall not
apply stored shifting on top of those already-targeted values.
- `SHIFTED_SEASONS_HANDLING-0018`: Operator-facing editing shall expose
shifted-season rule management in both of these places:
- show editing for show-level default mappings
- pattern editing for pattern-level override mappings
- `SHIFTED_SEASONS_HANDLING-0019`: User-facing shifted-season editing should
present open episode bounds as a natural empty-state input rather than forcing
operators to type the internal sentinel directly.
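Rules 0005 through 0011 can be condensed into a short sketch. The dict-based rule shape and helper names are illustrative; real storage uses ORM rows, and real resolution lives in `shifted_season_controller.py`:

```python
OPEN = -1  # internal sentinel for an open episode bound

def rule_matches(rule, season, episode):
    """Range matching per SHIFTED_SEASONS_HANDLING-0005 and -0006."""
    if season != rule["original_season"]:
        return False
    if rule["first_episode"] != OPEN and episode < rule["first_episode"]:
        return False
    if rule["last_episode"] != OPEN and episode > rule["last_episode"]:
        return False
    return True

def shift(season, episode, pattern_rules, show_rules):
    """Apply at most one rule: pattern over show, else identity.

    Rules never stack; the first matching rule in precedence order
    fully determines target numbering.
    """
    for rules in (pattern_rules, show_rules):
        for rule in rules:
            if rule_matches(rule, season, episode):
                return (season + rule["season_offset"],
                        episode + rule["episode_offset"])
    return season, episode
```

A pattern-level zero-offset rule still wins over a nonzero show-level rule here, producing identity mapping for its covered range, as rule 0015 requires.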
## Acceptance
- A show can exist with zero or more show-level shifted-season rules.
- A pattern can exist with zero or more pattern-level shifted-season rules.
- A shifted-season rule is stored against exactly one owner scope.
- A source tuple matching a pattern-level rule yields target numbering from that
rule even when a matching show-level rule also exists.
- A source tuple matching no pattern-level rule but matching a show-level rule
yields target numbering from the show-level rule.
- A source tuple matching neither scope yields identity mapping.
- A pattern-level zero-offset rule can explicitly override a nonzero show-level
rule for the same covered source range.
- Two shifted-season rules for the same owner scope and original season cannot
both be valid if they cover overlapping episode ranges.
- During `convert`, shifted numbering is what TMDB episode lookup and generated
season and episode tokens see when stored shifting is active.
- The TUI can display and maintain shifted-season rules from both the show and
pattern editing flows.
## Current Code Fit
- `src/ffx/model/show.py` and `src/ffx/model/pattern.py` now both expose
shifted-season relationships, and `src/ffx/model/shifted_season.py` stores
each rule against exactly one owner scope through `show_id` or `pattern_id`.
- `src/ffx/shifted_season_controller.py` now resolves mappings with
pattern-over-show precedence and applies at most one active rule for a source
tuple.
- `src/ffx/show_details_screen.py`,
`src/ffx/shifted_season_details_screen.py`, and
`src/ffx/shifted_season_delete_screen.py` provide reusable shifted-season
editing dialogs, and `src/ffx/pattern_details_screen.py` now exposes the
pattern-level override flow.
- `src/ffx/cli.py` now resolves shifted numbering during `convert` from:
pattern-level match, then show-level match, then identity mapping.
- `src/ffx/database.py` now migrates version-2 databases to version 3 by
preserving existing show-level rows and extending the schema for pattern-level
ownership.
## Risks
- The current CLI groups `--show`, `--season`, and `--episode` under one
override bucket used for TMDB-related behavior. Source-domain versus
target-domain semantics of each override must stay documented clearly so
stored shifting is neither skipped nor double-applied unexpectedly.
- Existing version-2 databases only contain show-owned shifted-season rows, so a
version-3 migration must preserve those rows as the show-level fallback layer.
- Current modern automated test coverage for shifted-season behavior is light,
so precedence, migration, and convert-time numbering behavior need focused
tests.


@@ -1,74 +0,0 @@
# Subtrack Mapping
This file defines the behavioral contract for mapping input subtracks to output
subtracks during conversion.
Primary source: actual tool code in `src/ffx/`.
Secondary source: `tests/legacy/`, used only to clarify intent and reveal gaps.
## Scope
- Ensuring each target subtrack is created from the corresponding source-subtrack information, including stream-level metadata.
- Mapping input streams to output streams during conversion.
- Using persisted pattern-track definitions from the database as the target schema.
- Allowing omission and reordering of retained tracks.
- Keeping stream-level metadata attached to the correct source-derived logical track after remapping.
- Normalizing target output into ordered track groups: video, audio, subtitle, then special types such as fonts or images.
## Terms
- `source_index`: identity of the originating input stream from ffprobe or an imported source descriptor.
- `index`: final output-track order across all retained tracks.
- `sub_index`: per-type position within the retained tracks of one type, for example audio stream `0` or subtitle stream `1`.
- `target schema`: stored or constructed output-track definition that decides which tracks are kept, omitted, reordered, and rewritten.
- `separate source file`: additional file bound to one target track slot whose media payload replaces the regular source payload for that slot.
## Rules
- `SUBTRACK_MAPPING-0001`: The system shall represent source-stream identity separately from output order. `source_index`, `index`, and `sub_index` are distinct concepts and shall not be collapsed into one field.
- `SUBTRACK_MAPPING-0002`: The system shall derive `source_index` for probed tracks from the original ffprobe stream index and preserve that identity through conversion planning.
- `SUBTRACK_MAPPING-0003`: Pattern-backed track definitions stored in the database shall persist both target output order and originating source-stream identity.
- `SUBTRACK_MAPPING-0004`: When a filename matches a pattern, the pattern target schema shall be the source of truth for which source tracks are retained, which are omitted, and in what order retained tracks appear in the output.
- `SUBTRACK_MAPPING-0005`: A target track may refer only to an existing source track of the same type. Conversion shall fail fast when a target track refers to a nonexistent source stream or a source stream of a different type.
- `SUBTRACK_MAPPING-0006`: The ffmpeg mapping phase shall be generated from target output order while resolving each retained output track back to its originating source stream via `source_index`.
- `SUBTRACK_MAPPING-0007`: Reordering and omission shall preserve logical track identity. Stream-level metadata, titles, languages, and disposition decisions shall stay attached to the correct source-derived logical track after mapping.
- `SUBTRACK_MAPPING-0008`: The system shall support one-off CLI stream-order overrides without requiring prior database edits.
- `SUBTRACK_MAPPING-0009`: Operator-facing inspection and editing surfaces shall expose enough source-versus-target information to let a user reason about subtrack mapping decisions.
- `SUBTRACK_MAPPING-0010`: Test coverage for subtrack mapping shall assert source-derived identity, omission, and output order explicitly. Final track counts or final type sequences alone are insufficient proof of correct mapping.
- `SUBTRACK_MAPPING-0011`: Retained target tracks shall appear in ordered groups: video track or tracks first, then audio tracks, then subtitle tracks, then special types such as fonts or images. Within each group, the target schema shall define the order.
- `SUBTRACK_MAPPING-0012`: Track omission is valid when required by output compatibility, when needed to normalize source tracks into the required target group order and schema, or when explicitly requested by database rules or CLI options.
- `SUBTRACK_MAPPING-0013`: If source tracks do not already comply with the required target group order, conversion shall reorder retained tracks to match the target ordering contract without losing source-track identity or stream-level metadata lineage.
## Separate Additional Source Files
- `SUBTRACK_MAPPING-0014`: A separate source file may substitute the media payload of one target subtrack without changing that target track's intended output position.
- `SUBTRACK_MAPPING-0015`: When a separate source file is used, the target track shall remain bound to the corresponding logical source track for mapping, validation, and metadata lineage.
- `SUBTRACK_MAPPING-0016`: Metadata for a substituted target track shall be merged from the regular source track and the separate source file when available.
- `SUBTRACK_MAPPING-0017`: If the separate source file provides a metadata field that is also present on the regular source track, the separate source file value shall win in the target output.
- `SUBTRACK_MAPPING-0018`: If a metadata field is absent from the separate source file, the system shall fall back to the corresponding metadata from the regular source track or target schema rewrite rules.
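Rules 0016 through 0018 amount to a field-wise merge with separate-file precedence. A minimal sketch, assuming track metadata is carried as plain dicts (the helper name is hypothetical, not FFX code):

```python
def mergeTrackMetadata(regularSource: dict, separateSource: dict) -> dict:
    # Start from the regular source track's fields, then let every field the
    # separate source file actually provides win (SUBTRACK_MAPPING-0017);
    # absent fields fall back to the regular side (SUBTRACK_MAPPING-0018).
    merged = dict(regularSource)
    merged.update({key: value for key, value in separateSource.items() if value is not None})
    return merged

merged = mergeTrackMetadata(
    {"title": "Director Commentary", "language": "eng"},
    {"language": "deu"},  # separate file only overrides the language field
)
# merged == {"title": "Director Commentary", "language": "deu"}
```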
## Acceptance
- Given a source media descriptor and a pattern-backed target schema, the planned output tracks can be listed in final output order and each retained track can still be traced to one originating source stream.
- Planned output order follows grouped target order: video, audio, subtitle, then special types.
- Tracks not referenced by the target schema are omitted from output mapping.
- Tracks may also be omitted when they are incompatible with the chosen output format or explicitly excluded by database or CLI rules.
- Two retained target tracks never originate from the same source stream unless duplication is implemented explicitly as a separate feature.
- If target-track metadata is rewritten after reordering, it is written onto the correct source-derived logical track rather than the track that merely occupies the same final output position.
- Invalid target-to-source references fail deterministically before the conversion job is launched.
- If a separate source file substitutes one target track, that track keeps its target slot and ordering while metadata is merged with separate-file values taking precedence when both sides provide the same field.
- A test proving subtrack mapping must assert at least one of: exact `source_index` to output-order mapping, omission of named source tracks, or preservation of per-track metadata after reorder.
## Test Notes
- `tests/legacy/scenario.py` names pattern behavior as `Filter/Reorder Tracks`.
- `tests/legacy/scenario_4.py` is the strongest end-to-end signal because it runs DB-backed conversion and reapplies source indices before assertion.
- `tests/legacy/track_tag_combinator_2_0.py` and `tests/legacy/track_tag_combinator_3_4.py` sort result tracks by `source_index` before checking tags, which matches the intended identity model.
- Legacy permutation combinators define permutations but their assertion functions are stubs.
- Some legacy scenarios produce `AP` and `SP` selectors but do not execute them.
## Risks
- `src/ffx/media_descriptor.py` contains an explicit `rearrangeTrackDescriptors()` path whose current implementation appears defective and under-tested.
- Separate-source-file metadata precedence is only partly expressed in current implementation paths and should be covered directly in the rewritten test suite.
- Production code expresses the mapping contract more clearly than the legacy harness, so a rewrite should add direct logic-level tests for mapping and reorder planning.


@@ -1,144 +0,0 @@
# Test Rewrite
This file captures the structure executed by `tests/legacy_runner.py` today and
defines the target shape for a complete rewrite.
Detailed product rules for source-to-target subtrack mapping live in
`requirements/subtrack_mapping.md`. This file describes only how tests cover
that area.
## Interpreter Requirement
- Agents shall run Python-side test commands with `~/.local/share/ffx.venv/bin/python`.
- This applies to the legacy harness, `unittest`, `pytest`, helper scripts, and `python -m ffx ...` test invocations.
- Agents shall not silently substitute `python`, `python3`, or another interpreter for Python-side test work.
- If `~/.local/share/ffx.venv/bin/python` is missing or not executable, agents shall stop and report the missing venv instead of continuing with Python-side test execution.
## Shell Environment Requirement
- Agents shall source `~/.bashrc` from an interactive Bash shell before running TMDB-dependent test commands or TMDB-dependent `python -m ffx ...` test invocations.
- Agents shall not source `~/.bashrc.d/interactive/77_tmdb.sh` directly for normal test work; `~/.bashrc` is the required entry point.
- In automation this means agents shall use an interactive Bash invocation such as `bash -ic 'source ~/.bashrc && ...'`, because a non-interactive `bash -lc` returns from `~/.bashrc` before the interactive fragments are loaded.
- If sourcing `~/.bashrc` still does not provide required shell environment such as `TMDB_API_KEY`, agents shall stop and report the missing environment instead of continuing with TMDB-dependent test execution.
## Current Harness
- Entrypoint: `~/.local/share/ffx.venv/bin/python tests/legacy_runner.py run`
- Runner style: custom Click CLI, not `pytest` or `unittest`
- Commands:
- `run`: discover scenario files, instantiate each scenario, run yielded jobs
- `dupe`: helper command that creates duplicate media fixtures; not part of the test run
- Filters: `--scenario`, `--variant`, `--limit`
- Shared context:
- builds one mutable dict for the whole run
- installs loggers and writes `ffx_test_report.log`
- creates `ConfigurationController` eagerly
- tracks only passed and failed counters
- Discovery:
- scenario files: `tests/legacy/scenario_*.py`
- combinators: `glob + importlib + inspect` by filename convention
- ordering: implicit glob order, no explicit sorting
- Skip behavior:
- Scenario 4 is skipped when `TMDB_API_KEY` is missing
- only `TMDB_API_KEY_NOT_PRESENT_EXCEPTION` is caught at scenario construction time
## Current Scenarios
- `1`: `tests/legacy/scenario_1.py`
- focus: basename generation without pattern lookup or TMDB
- inputs per job: `1`
- jobs: `140`
- expected failures: `0`
- execution: build one synthetic source file, run `~/.local/share/ffx.venv/bin/python -m ffx convert`, assert filename selectors only
- selectors executed: `B`, `L`, `I`
- selectors defined but not executed: `S`, `R`
- `2`: `tests/legacy/scenario_2.py`
- focus: conversion matrix over media layouts, dispositions, tags, and permutations
- inputs per job: `1`
- jobs: `8193`
- expected failures: `3267`
- execution: build one synthetic source file, run `~/.local/share/ffx.venv/bin/python -m ffx convert`, probe result with `FileProperties`, assert track layout and selected audio and subtitle metadata
- selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
- selectors defined but not executed: `MT`, `AP`, `SP`, `J`
- `4`: `tests/legacy/scenario_4.py`
- focus: pattern-driven batch conversion with SQLite state and live TMDB naming
- inputs per job: `6`
- jobs: `768`
- expected failures: `336`
- execution: build six synthetic preset files, recreate temp SQLite DB, insert show and pattern, run one batch convert command via `~/.local/share/ffx.venv/bin/python`, query TMDB during assertions
- selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
- selectors defined but not executed: `MT`, `AP`, `SP`, `J`
- notes:
- uses `MediaCombinator6` only
- issues live HTTP requests through `TmdbController` with no request cache
## Current Combinator Families
- scenario files discovered: `3`
- basename combinators discovered: `2`
- media combinators discovered: `8`
- media tag combinators discovered: `3`
- disposition combinator 2 variants: `4`
- disposition combinator 3 variants: `5`
- track tag combinator 2 variants: `4`
- track tag combinator 3 variants: `5`
- indicator variants: `7`
- label variants: `2`
- show variants: `3`
- release variants: `3`
- permutation 2 variants: `2`
- permutation 3 variants: `3`
## Current Totals
- full run without TMDB: `8333`
- full run with TMDB: `9101`
- Scenario 4 generated source files: `4608`
- Scenario 4 live TMDB episode queries: `4608`
## Current Behavior Areas
- output basename rules for label, season and episode indicator, show name, and release suffix combinations
- track layout normalization across the eight media combinator shapes from `VA` through `VAASSS`
- two-track and three-track disposition edge cases, including intentional failure cases
- two-track and three-track track-tag preservation checks, including checks that sort results by source identity
- container-level media tag handling
- pattern-backed conversion against a temporary SQLite database
- TMDB-assisted episode naming for batch conversion
## Structural Findings
- The suite is process-heavy: most jobs run `ffmpeg` to generate a fixture and then spawn the FFX CLI as a subprocess.
- The suite is integration-first and has almost no isolated unit-level coverage for pure logic.
- The base `Combinator` class is a placeholder and is not the real abstraction boundary used by the suite.
- Many combinator methods are placeholders: there are `25` `pass` statements across the current test modules.
- Several assertion families are never executed because scenario selector dispatch is incomplete.
- Scenario comments mention a Scenario 3, but no `scenario_3.py` exists.
- `tests/legacy/_basename_combinator_1.py` is effectively orphaned because discovery only matches `basename_combinator_*.py`.
- `tests/legacy/disposition_combinator_2_3 .py` contains an embedded space in the filename and is still part of discovery.
- Expected failures are validated only as subprocess return-code matches, not as specific error types or messages.
- The current suite depends on `ffmpeg`, `ffprobe`, SQLite, the local Python environment, and for Scenario 4 a live TMDB API key plus network access.
## Rewrite Target
- Replace the custom Click harness with a standard test runner, preferably `pytest`.
- Split the suite into explicit layers: unit, integration, and optional external-system tests.
- Keep unit tests as the default path and make them runnable without `ffmpeg`, `ffprobe`, TMDB, or a user config directory.
- Model discovery explicitly in code instead of relying on glob-plus-reflection naming conventions.
- Convert the current Cartesian-product combinators into readable parametrized cases grouped by behavior area.
- Preserve the current behavior areas, but represent them with targeted cases instead of thousands of opaque variant IDs.
- Make every assertion family explicit and executable; there must be no selector that is produced but never consumed.
- Replace live TMDB access with fixtures or mocks in normal runs; any live-contract test must be opt-in.
- Replace ad hoc subprocess return-code checks with assertions on typed exceptions, stderr content, or structured outputs.
- Provide small reusable media fixtures or fixture builders so only a narrow integration slice needs `ffmpeg`-generated media.
- Make database tests self-contained and fast through temporary databases and direct controller-level assertions.
- Make ordering, naming, and selection deterministic so a contributor can predict exactly what will run.
- Expose a small smoke suite for quick local runs and CI, plus a separately marked slower integration suite.
- Prefer domain-oriented test modules over combinator-family modules: basename, pattern matching, metadata rewrite, track ordering, TMDB naming, CLI smoke, and failure handling.
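As a rough illustration of the parametrization style proposed here, behavior-named `pytest` cases could replace a slice of the media-layout matrix. `normalize_layout` is a toy stand-in for the real planner, not FFX code:

```python
import pytest

TYPE_BY_CHAR = {"V": "video", "A": "audio", "S": "subtitle"}
GROUP_ORDER = {"video": 0, "audio": 1, "subtitle": 2}

def normalize_layout(layout: str) -> list:
    # Toy planner: expand a layout code like "VSA" into track types and
    # regroup them as video, then audio, then subtitle (stable within groups).
    return sorted((TYPE_BY_CHAR[c] for c in layout), key=GROUP_ORDER.__getitem__)

@pytest.mark.parametrize(
    "layout, expected",
    [
        pytest.param("VA", ["video", "audio"], id="video-audio"),
        pytest.param("VSA", ["video", "audio", "subtitle"], id="subtitle-regrouped"),
    ],
)
def test_layout_is_grouped(layout, expected):
    assert normalize_layout(layout) == expected
```

Readable ids such as `subtitle-regrouped` are the point: a failure names the behavior under test instead of an opaque variant string.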
## Rewrite Acceptance
- A default local test run finishes quickly and without network access.
- A contributor can identify which behavior a failing test covers without decoding variant strings like `VAASSS-A:D10-S:T001`.
- All current intended failure behaviors remain covered, but each one is asserted directly and readably.
- The rewritten suite can be adopted by CI without requiring live TMDB credentials.

src/ffx/_iso_language.py Normal file

@@ -0,0 +1,220 @@
from enum import Enum
import difflib
class IsoLanguage(Enum):
ABKHAZIAN = {"name": "Abkhazian", "iso639_1": "ab", "iso639_2": ["abk"]}
AFAR = {"name": "Afar", "iso639_1": "aa", "iso639_2": ["aar"]}
AFRIKAANS = {"name": "Afrikaans", "iso639_1": "af", "iso639_2": ["afr"]}
AKAN = {"name": "Akan", "iso639_1": "ak", "iso639_2": ["aka"]}
ALBANIAN = {"name": "Albanian", "iso639_1": "sq", "iso639_2": ["sqi", "alb"]}
AMHARIC = {"name": "Amharic", "iso639_1": "am", "iso639_2": ["amh"]}
ARABIC = {"name": "Arabic", "iso639_1": "ar", "iso639_2": ["ara"]}
ARAGONESE = {"name": "Aragonese", "iso639_1": "an", "iso639_2": ["arg"]}
ARMENIAN = {"name": "Armenian", "iso639_1": "hy", "iso639_2": ["hye", "arm"]}
ASSAMESE = {"name": "Assamese", "iso639_1": "as", "iso639_2": ["asm"]}
AVARIC = {"name": "Avaric", "iso639_1": "av", "iso639_2": ["ava"]}
AVESTAN = {"name": "Avestan", "iso639_1": "ae", "iso639_2": ["ave"]}
AYMARA = {"name": "Aymara", "iso639_1": "ay", "iso639_2": ["aym"]}
AZERBAIJANI = {"name": "Azerbaijani", "iso639_1": "az", "iso639_2": ["aze"]}
BAMBARA = {"name": "Bambara", "iso639_1": "bm", "iso639_2": ["bam"]}
BASHKIR = {"name": "Bashkir", "iso639_1": "ba", "iso639_2": ["bak"]}
BASQUE = {"name": "Basque", "iso639_1": "eu", "iso639_2": ["eus", "baq"]}
BELARUSIAN = {"name": "Belarusian", "iso639_1": "be", "iso639_2": ["bel"]}
BENGALI = {"name": "Bengali", "iso639_1": "bn", "iso639_2": ["ben"]}
BISLAMA = {"name": "Bislama", "iso639_1": "bi", "iso639_2": ["bis"]}
BOKMAL = {"name": "Bokmål", "iso639_1": "nb", "iso639_2": ["nob"]}
BOSNIAN = {"name": "Bosnian", "iso639_1": "bs", "iso639_2": ["bos"]}
BRETON = {"name": "Breton", "iso639_1": "br", "iso639_2": ["bre"]}
BULGARIAN = {"name": "Bulgarian", "iso639_1": "bg", "iso639_2": ["bul"]}
BURMESE = {"name": "Burmese", "iso639_1": "my", "iso639_2": ["mya", "bur"]}
CATALAN = {"name": "Catalan", "iso639_1": "ca", "iso639_2": ["cat"]}
CHAMORRO = {"name": "Chamorro", "iso639_1": "ch", "iso639_2": ["cha"]}
CHECHEN = {"name": "Chechen", "iso639_1": "ce", "iso639_2": ["che"]}
CHICHEWA = {"name": "Chichewa", "iso639_1": "ny", "iso639_2": ["nya"]}
CHINESE = {"name": "Chinese", "iso639_1": "zh", "iso639_2": ["zho", "chi"]}
CHURCH_SLAVIC = {"name": "Church Slavic", "iso639_1": "cu", "iso639_2": ["chu"]}
CHUVASH = {"name": "Chuvash", "iso639_1": "cv", "iso639_2": ["chv"]}
CORNISH = {"name": "Cornish", "iso639_1": "kw", "iso639_2": ["cor"]}
CORSICAN = {"name": "Corsican", "iso639_1": "co", "iso639_2": ["cos"]}
CREE = {"name": "Cree", "iso639_1": "cr", "iso639_2": ["cre"]}
CROATIAN = {"name": "Croatian", "iso639_1": "hr", "iso639_2": ["hrv"]}
CZECH = {"name": "Czech", "iso639_1": "cs", "iso639_2": ["ces", "cze"]}
DANISH = {"name": "Danish", "iso639_1": "da", "iso639_2": ["dan"]}
DIVEHI = {"name": "Divehi", "iso639_1": "dv", "iso639_2": ["div"]}
DUTCH = {"name": "Dutch", "iso639_1": "nl", "iso639_2": ["nld", "dut"]}
DZONGKHA = {"name": "Dzongkha", "iso639_1": "dz", "iso639_2": ["dzo"]}
ENGLISH = {"name": "English", "iso639_1": "en", "iso639_2": ["eng"]}
ESPERANTO = {"name": "Esperanto", "iso639_1": "eo", "iso639_2": ["epo"]}
ESTONIAN = {"name": "Estonian", "iso639_1": "et", "iso639_2": ["est"]}
EWE = {"name": "Ewe", "iso639_1": "ee", "iso639_2": ["ewe"]}
FAROESE = {"name": "Faroese", "iso639_1": "fo", "iso639_2": ["fao"]}
FIJIAN = {"name": "Fijian", "iso639_1": "fj", "iso639_2": ["fij"]}
FINNISH = {"name": "Finnish", "iso639_1": "fi", "iso639_2": ["fin"]}
FRENCH = {"name": "French", "iso639_1": "fr", "iso639_2": ["fra", "fre"]}
FULAH = {"name": "Fulah", "iso639_1": "ff", "iso639_2": ["ful"]}
GALICIAN = {"name": "Galician", "iso639_1": "gl", "iso639_2": ["glg"]}
GANDA = {"name": "Ganda", "iso639_1": "lg", "iso639_2": ["lug"]}
GEORGIAN = {"name": "Georgian", "iso639_1": "ka", "iso639_2": ["kat", "geo"]}
GERMAN = {"name": "German", "iso639_1": "de", "iso639_2": ["deu", "ger"]}
GREEK = {"name": "Greek", "iso639_1": "el", "iso639_2": ["ell", "gre"]}
GUARANI = {"name": "Guarani", "iso639_1": "gn", "iso639_2": ["grn"]}
GUJARATI = {"name": "Gujarati", "iso639_1": "gu", "iso639_2": ["guj"]}
HAITIAN = {"name": "Haitian", "iso639_1": "ht", "iso639_2": ["hat"]}
HAUSA = {"name": "Hausa", "iso639_1": "ha", "iso639_2": ["hau"]}
HEBREW = {"name": "Hebrew", "iso639_1": "he", "iso639_2": ["heb"]}
HERERO = {"name": "Herero", "iso639_1": "hz", "iso639_2": ["her"]}
HINDI = {"name": "Hindi", "iso639_1": "hi", "iso639_2": ["hin"]}
HIRI_MOTU = {"name": "Hiri Motu", "iso639_1": "ho", "iso639_2": ["hmo"]}
HUNGARIAN = {"name": "Hungarian", "iso639_1": "hu", "iso639_2": ["hun"]}
ICELANDIC = {"name": "Icelandic", "iso639_1": "is", "iso639_2": ["isl", "ice"]}
IDO = {"name": "Ido", "iso639_1": "io", "iso639_2": ["ido"]}
IGBO = {"name": "Igbo", "iso639_1": "ig", "iso639_2": ["ibo"]}
INDONESIAN = {"name": "Indonesian", "iso639_1": "id", "iso639_2": ["ind"]}
INTERLINGUA = {"name": "Interlingua", "iso639_1": "ia", "iso639_2": ["ina"]}
INTERLINGUE = {"name": "Interlingue", "iso639_1": "ie", "iso639_2": ["ile"]}
INUKTITUT = {"name": "Inuktitut", "iso639_1": "iu", "iso639_2": ["iku"]}
INUPIAQ = {"name": "Inupiaq", "iso639_1": "ik", "iso639_2": ["ipk"]}
IRISH = {"name": "Irish", "iso639_1": "ga", "iso639_2": ["gle"]}
ITALIAN = {"name": "Italian", "iso639_1": "it", "iso639_2": ["ita"]}
JAPANESE = {"name": "Japanese", "iso639_1": "ja", "iso639_2": ["jpn"]}
JAVANESE = {"name": "Javanese", "iso639_1": "jv", "iso639_2": ["jav"]}
KALAALLISUT = {"name": "Kalaallisut", "iso639_1": "kl", "iso639_2": ["kal"]}
KANNADA = {"name": "Kannada", "iso639_1": "kn", "iso639_2": ["kan"]}
KANURI = {"name": "Kanuri", "iso639_1": "kr", "iso639_2": ["kau"]}
KASHMIRI = {"name": "Kashmiri", "iso639_1": "ks", "iso639_2": ["kas"]}
KAZAKH = {"name": "Kazakh", "iso639_1": "kk", "iso639_2": ["kaz"]}
KHMER = {"name": "Khmer", "iso639_1": "km", "iso639_2": ["khm"]}
KIKUYU = {"name": "Kikuyu", "iso639_1": "ki", "iso639_2": ["kik"]}
KINYARWANDA = {"name": "Kinyarwanda", "iso639_1": "rw", "iso639_2": ["kin"]}
KIRGHIZ = {"name": "Kirghiz", "iso639_1": "ky", "iso639_2": ["kir"]}
KOMI = {"name": "Komi", "iso639_1": "kv", "iso639_2": ["kom"]}
KONGO = {"name": "Kongo", "iso639_1": "kg", "iso639_2": ["kon"]}
KOREAN = {"name": "Korean", "iso639_1": "ko", "iso639_2": ["kor"]}
KUANYAMA = {"name": "Kuanyama", "iso639_1": "kj", "iso639_2": ["kua"]}
KURDISH = {"name": "Kurdish", "iso639_1": "ku", "iso639_2": ["kur"]}
LAO = {"name": "Lao", "iso639_1": "lo", "iso639_2": ["lao"]}
LATIN = {"name": "Latin", "iso639_1": "la", "iso639_2": ["lat"]}
LATVIAN = {"name": "Latvian", "iso639_1": "lv", "iso639_2": ["lav"]}
LIMBURGAN = {"name": "Limburgan", "iso639_1": "li", "iso639_2": ["lim"]}
LINGALA = {"name": "Lingala", "iso639_1": "ln", "iso639_2": ["lin"]}
LITHUANIAN = {"name": "Lithuanian", "iso639_1": "lt", "iso639_2": ["lit"]}
LUBA_KATANGA = {"name": "Luba-Katanga", "iso639_1": "lu", "iso639_2": ["lub"]}
LUXEMBOURGISH = {"name": "Luxembourgish", "iso639_1": "lb", "iso639_2": ["ltz"]}
MACEDONIAN = {"name": "Macedonian", "iso639_1": "mk", "iso639_2": ["mkd", "mac"]}
MALAGASY = {"name": "Malagasy", "iso639_1": "mg", "iso639_2": ["mlg"]}
MALAY = {"name": "Malay", "iso639_1": "ms", "iso639_2": ["msa", "may"]}
MALAYALAM = {"name": "Malayalam", "iso639_1": "ml", "iso639_2": ["mal"]}
MALTESE = {"name": "Maltese", "iso639_1": "mt", "iso639_2": ["mlt"]}
MANX = {"name": "Manx", "iso639_1": "gv", "iso639_2": ["glv"]}
MAORI = {"name": "Maori", "iso639_1": "mi", "iso639_2": ["mri", "mao"]}
MARATHI = {"name": "Marathi", "iso639_1": "mr", "iso639_2": ["mar"]}
MARSHALLESE = {"name": "Marshallese", "iso639_1": "mh", "iso639_2": ["mah"]}
MONGOLIAN = {"name": "Mongolian", "iso639_1": "mn", "iso639_2": ["mon"]}
NAURU = {"name": "Nauru", "iso639_1": "na", "iso639_2": ["nau"]}
NAVAJO = {"name": "Navajo", "iso639_1": "nv", "iso639_2": ["nav"]}
NDONGA = {"name": "Ndonga", "iso639_1": "ng", "iso639_2": ["ndo"]}
NEPALI = {"name": "Nepali", "iso639_1": "ne", "iso639_2": ["nep"]}
NORTH_NDEBELE = {"name": "North Ndebele", "iso639_1": "nd", "iso639_2": ["nde"]}
NORTHERN_SAMI = {"name": "Northern Sami", "iso639_1": "se", "iso639_2": ["sme"]}
NORWEGIAN = {"name": "Norwegian", "iso639_1": "no", "iso639_2": ["nor"]}
NORWEGIAN_NYNORSK = {"name": "Nynorsk", "iso639_1": "nn", "iso639_2": ["nno"]}
OCCITAN = {"name": "Occitan", "iso639_1": "oc", "iso639_2": ["oci"]}
OJIBWA = {"name": "Ojibwa", "iso639_1": "oj", "iso639_2": ["oji"]}
ORIYA = {"name": "Oriya", "iso639_1": "or", "iso639_2": ["ori"]}
OROMO = {"name": "Oromo", "iso639_1": "om", "iso639_2": ["orm"]}
OSSETIAN = {"name": "Ossetian", "iso639_1": "os", "iso639_2": ["oss"]}
PALI = {"name": "Pali", "iso639_1": "pi", "iso639_2": ["pli"]}
PANJABI = {"name": "Panjabi", "iso639_1": "pa", "iso639_2": ["pan"]}
PERSIAN = {"name": "Persian", "iso639_1": "fa", "iso639_2": ["fas", "per"]}
POLISH = {"name": "Polish", "iso639_1": "pl", "iso639_2": ["pol"]}
PORTUGUESE = {"name": "Portuguese", "iso639_1": "pt", "iso639_2": ["por"]}
PUSHTO = {"name": "Pushto", "iso639_1": "ps", "iso639_2": ["pus"]}
QUECHUA = {"name": "Quechua", "iso639_1": "qu", "iso639_2": ["que"]}
ROMANIAN = {"name": "Romanian", "iso639_1": "ro", "iso639_2": ["ron", "rum"]}
ROMANSH = {"name": "Romansh", "iso639_1": "rm", "iso639_2": ["roh"]}
RUNDI = {"name": "Rundi", "iso639_1": "rn", "iso639_2": ["run"]}
RUSSIAN = {"name": "Russian", "iso639_1": "ru", "iso639_2": ["rus"]}
SAMOAN = {"name": "Samoan", "iso639_1": "sm", "iso639_2": ["smo"]}
SANGO = {"name": "Sango", "iso639_1": "sg", "iso639_2": ["sag"]}
SANSKRIT = {"name": "Sanskrit", "iso639_1": "sa", "iso639_2": ["san"]}
SARDINIAN = {"name": "Sardinian", "iso639_1": "sc", "iso639_2": ["srd"]}
SCOTTISH_GAELIC = {"name": "Scottish Gaelic", "iso639_1": "gd", "iso639_2": ["gla"]}
SERBIAN = {"name": "Serbian", "iso639_1": "sr", "iso639_2": ["srp"]}
SHONA = {"name": "Shona", "iso639_1": "sn", "iso639_2": ["sna"]}
SICHUAN_YI = {"name": "Sichuan Yi", "iso639_1": "ii", "iso639_2": ["iii"]}
SINDHI = {"name": "Sindhi", "iso639_1": "sd", "iso639_2": ["snd"]}
SINHALA = {"name": "Sinhala", "iso639_1": "si", "iso639_2": ["sin"]}
SLOVAK = {"name": "Slovak", "iso639_1": "sk", "iso639_2": ["slk", "slo"]}
SLOVENIAN = {"name": "Slovenian", "iso639_1": "sl", "iso639_2": ["slv"]}
SOMALI = {"name": "Somali", "iso639_1": "so", "iso639_2": ["som"]}
SOUTH_NDEBELE = {"name": "South Ndebele", "iso639_1": "nr", "iso639_2": ["nbl"]}
SOUTHERN_SOTHO = {"name": "Southern Sotho", "iso639_1": "st", "iso639_2": ["sot"]}
SPANISH = {"name": "Spanish", "iso639_1": "es", "iso639_2": ["spa"]}
SUNDANESE = {"name": "Sundanese", "iso639_1": "su", "iso639_2": ["sun"]}
SWAHILI = {"name": "Swahili", "iso639_1": "sw", "iso639_2": ["swa"]}
SWATI = {"name": "Swati", "iso639_1": "ss", "iso639_2": ["ssw"]}
SWEDISH = {"name": "Swedish", "iso639_1": "sv", "iso639_2": ["swe"]}
TAGALOG = {"name": "Tagalog", "iso639_1": "tl", "iso639_2": ["tgl"]}
TAHITIAN = {"name": "Tahitian", "iso639_1": "ty", "iso639_2": ["tah"]}
TAJIK = {"name": "Tajik", "iso639_1": "tg", "iso639_2": ["tgk"]}
TAMIL = {"name": "Tamil", "iso639_1": "ta", "iso639_2": ["tam"]}
TATAR = {"name": "Tatar", "iso639_1": "tt", "iso639_2": ["tat"]}
TELUGU = {"name": "Telugu", "iso639_1": "te", "iso639_2": ["tel"]}
THAI = {"name": "Thai", "iso639_1": "th", "iso639_2": ["tha"]}
TIBETAN = {"name": "Tibetan", "iso639_1": "bo", "iso639_2": ["bod", "tib"]}
TIGRINYA = {"name": "Tigrinya", "iso639_1": "ti", "iso639_2": ["tir"]}
TONGA = {"name": "Tonga", "iso639_1": "to", "iso639_2": ["ton"]}
TSONGA = {"name": "Tsonga", "iso639_1": "ts", "iso639_2": ["tso"]}
TSWANA = {"name": "Tswana", "iso639_1": "tn", "iso639_2": ["tsn"]}
TURKISH = {"name": "Turkish", "iso639_1": "tr", "iso639_2": ["tur"]}
TURKMEN = {"name": "Turkmen", "iso639_1": "tk", "iso639_2": ["tuk"]}
TWI = {"name": "Twi", "iso639_1": "tw", "iso639_2": ["twi"]}
UIGHUR = {"name": "Uighur", "iso639_1": "ug", "iso639_2": ["uig"]}
UKRAINIAN = {"name": "Ukrainian", "iso639_1": "uk", "iso639_2": ["ukr"]}
URDU = {"name": "Urdu", "iso639_1": "ur", "iso639_2": ["urd"]}
UZBEK = {"name": "Uzbek", "iso639_1": "uz", "iso639_2": ["uzb"]}
VENDA = {"name": "Venda", "iso639_1": "ve", "iso639_2": ["ven"]}
VIETNAMESE = {"name": "Vietnamese", "iso639_1": "vi", "iso639_2": ["vie"]}
VOLAPUK = {"name": "Volapük", "iso639_1": "vo", "iso639_2": ["vol"]}
WALLOON = {"name": "Walloon", "iso639_1": "wa", "iso639_2": ["wln"]}
WELSH = {"name": "Welsh", "iso639_1": "cy", "iso639_2": ["cym", "wel"]}
WESTERN_FRISIAN = {"name": "Western Frisian", "iso639_1": "fy", "iso639_2": ["fry"]}
WOLOF = {"name": "Wolof", "iso639_1": "wo", "iso639_2": ["wol"]}
XHOSA = {"name": "Xhosa", "iso639_1": "xh", "iso639_2": ["xho"]}
YIDDISH = {"name": "Yiddish", "iso639_1": "yi", "iso639_2": ["yid"]}
YORUBA = {"name": "Yoruba", "iso639_1": "yo", "iso639_2": ["yor"]}
ZHUANG = {"name": "Zhuang", "iso639_1": "za", "iso639_2": ["zha"]}
ZULU = {"name": "Zulu", "iso639_1": "zu", "iso639_2": ["zul"]}
FILIPINO = {"name": "Filipino", "iso639_1": "tl", "iso639_2": ["fil"]}
UNDEFINED = {"name": "undefined", "iso639_1": "xx", "iso639_2": ["und"]}
@staticmethod
def find(label : str):
closestMatches = difflib.get_close_matches(label, [l.value["name"] for l in IsoLanguage], n=1)
if closestMatches:
foundLangs = [l for l in IsoLanguage if l.value["name"] == closestMatches[0]]
return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
else:
return IsoLanguage.UNDEFINED
@staticmethod
def findThreeLetter(threeLetter : str):
foundLangs = [l for l in IsoLanguage if str(threeLetter) in l.value["iso639_2"]]
return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
def label(self):
return str(self.value["name"])
def twoLetter(self):
return str(self.value["iso639_1"])
def threeLetter(self):
return str(self.value["iso639_2"][0])
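The `find()` lookup above reduces to `difflib.get_close_matches` over the enum's display names. The same behavior in a standalone sketch, with a trimmed name table standing in for the full ISO 639 list:

```python
import difflib

# Trimmed name table; the real IsoLanguage enum carries the full ISO 639 list.
LANGUAGE_NAMES = ["English", "German", "Greek", "Georgian"]

def fuzzyLanguageName(label: str) -> str:
    # get_close_matches returns up to n matches above its similarity cutoff,
    # best match first; an empty result maps to the UNDEFINED fallback.
    closestMatches = difflib.get_close_matches(label, LANGUAGE_NAMES, n=1)
    return closestMatches[0] if closestMatches else "undefined"

print(fuzzyLanguageName("germann"))  # misspelling still resolves to "German"
print(fuzzyLanguageName("xyz"))      # no close match falls back to "undefined"
```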


@@ -0,0 +1,67 @@
from enum import Enum
import os
class AttachmentFormat(Enum):
TTF = {'identifier': 'ttf', 'format': None, 'extension': 'ttf', 'label': 'TTF'}
PNG = {'identifier': 'png', 'format': None, 'extension': 'png', 'label': 'PNG'}
UNKNOWN = {'identifier': 'unknown', 'format': None, 'extension': None, 'label': 'UNKNOWN'}
def identifier(self):
return str(self.value['identifier'])
def label(self):
return str(self.value['label'])
def format(self):
return self.value['format']
def extension(self):
return str(self.value['extension'])
@staticmethod
def identify(identifier: str):
formats = [f for f in AttachmentFormat if f.value['identifier'] == str(identifier)]
if formats:
return formats[0]
return AttachmentFormat.UNKNOWN
@staticmethod
def identifyFfprobeStream(streamObj: dict):
identifier = streamObj.get("codec_name")
identifiedFormat = AttachmentFormat.identify(identifier)
if identifiedFormat != AttachmentFormat.UNKNOWN:
return identifiedFormat
if str(streamObj.get("codec_type", "")).strip() != "attachment":
return AttachmentFormat.UNKNOWN
tags = streamObj.get("tags", {}) or {}
mimetype = str(tags.get("mimetype", "")).strip().lower()
filename = str(tags.get("filename", "")).strip().lower()
filenameExtension = os.path.splitext(filename)[1]
if (
mimetype in {
"font/ttf",
"application/x-truetype-font",
"application/x-font-ttf",
}
or "truetype" in mimetype
or filenameExtension == ".ttf"
):
return AttachmentFormat.TTF
if mimetype in {"image/png", "image/x-png"} or filenameExtension == ".png":
return AttachmentFormat.PNG
return AttachmentFormat.UNKNOWN
@staticmethod
def fromTrackCodec(trackCodec):
identifier = getattr(trackCodec, "identifier", None)
if callable(identifier):
return AttachmentFormat.identify(trackCodec.identifier())
return AttachmentFormat.UNKNOWN
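A usage-shaped sketch of the ffprobe fallback above: when `codec_name` is missing, the attachment's `mimetype` and `filename` tags still identify the format. This re-implements the decision order standalone for illustration rather than importing the class:

```python
import os

def classifyAttachment(streamObj: dict) -> str:
    # Same decision order as identifyFfprobeStream: codec_name first, then
    # mimetype, then the filename extension, otherwise UNKNOWN.
    codecName = streamObj.get("codec_name")
    if codecName in ("ttf", "png"):
        return codecName.upper()
    if str(streamObj.get("codec_type", "")).strip() != "attachment":
        return "UNKNOWN"
    tags = streamObj.get("tags", {}) or {}
    mimetype = str(tags.get("mimetype", "")).strip().lower()
    extension = os.path.splitext(str(tags.get("filename", "")).strip().lower())[1]
    if "truetype" in mimetype or mimetype == "font/ttf" or extension == ".ttf":
        return "TTF"
    if mimetype in ("image/png", "image/x-png") or extension == ".png":
        return "PNG"
    return "UNKNOWN"

# Font attachment with no codec_name still classifies via its filename tag:
stream = {"codec_type": "attachment", "tags": {"filename": "OpenSans.ttf"}}
# classifyAttachment(stream) -> "TTF"
```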


@@ -34,6 +34,7 @@ if TYPE_CHECKING:
from ffx.track_descriptor import TrackDescriptor
LIGHTWEIGHT_COMMANDS = {None, 'version', 'help', 'setup', 'configure_workstation', 'upgrade', 'rename'}
CONFIG_ONLY_COMMANDS = {'edit'}
CPU_OPTION_HELP = (
"Limit CPU for started processes. Use an absolute cpulimit value such as 200 "
+ "(about 2 cores), or use a percentage such as 25% for a share of present cores. "
@@ -67,6 +68,14 @@ CUT_OPTION_HELP = (
+ "or --cut START,DURATION for an explicit start and duration. "
+ "Omit to disable."
)
COPY_VIDEO_OPTION_HELP = (
"Copy video streams without re-encoding. Skips video encoder options "
+ "and video filters."
)
COPY_AUDIO_OPTION_HELP = (
"Copy audio streams without re-encoding. Skips audio encoder options "
+ "and audio filters."
)
def normalizeNicenessOption(ctx, param, value):
@@ -249,10 +258,17 @@ def buildRenameTargetFilename(
@click.group()
@click.pass_context
@click.option('--language', 'app_language', type=str, default='', help='Set application language')
@click.option('--database-file', type=str, default='', help='Path to database file')
@click.option(
'--debug',
is_flag=True,
default=False,
help='Enable debug-only TUI diagnostics such as the log pane',
)
@click.option('-v', '--verbose', type=int, default=0, help='Set verbosity of output')
@click.option("--dry-run", is_flag=True, default=False)
def ffx(ctx, database_file, verbose, dry_run):
def ffx(ctx, app_language, database_file, debug, verbose, dry_run):
"""FFX"""
ctx.obj = {}
@@ -260,22 +276,38 @@ def ffx(ctx, database_file, verbose, dry_run):
if ctx.resilient_parsing:
return
from ffx.i18n import (
read_configured_language,
resolve_application_language,
set_current_language,
)
resolvedLanguage = resolve_application_language(
cli_language=app_language,
config_language=read_configured_language(),
)
set_current_language(resolvedLanguage)
ctx.obj['language'] = resolvedLanguage
ctx.obj['debug'] = bool(debug)
if ctx.invoked_subcommand in LIGHTWEIGHT_COMMANDS:
ctx.obj['dry_run'] = dry_run
ctx.obj['verbosity'] = verbose
return
from ffx.configuration_controller import ConfigurationController
from ffx.database import databaseContext
from ffx.logging_utils import configure_ffx_logger
ctx.obj['config'] = ConfigurationController()
ctx.obj['database'] = databaseContext(databasePath=database_file
if database_file else ctx.obj['config'].getDatabaseFilePath())
ctx.obj['dry_run'] = dry_run
ctx.obj['verbosity'] = verbose
ctx.obj['debug'] = bool(debug)
ctx.obj['language'] = resolve_application_language(
cli_language=app_language,
config_language=ctx.obj['config'].getLanguage(),
)
set_current_language(ctx.obj['language'])
# Critical 50
# Error 40
@@ -291,6 +323,17 @@ def ffx(ctx, database_file, verbose, dry_run):
consoleLogVerbosity,
)
if ctx.invoked_subcommand in CONFIG_ONLY_COMMANDS:
return
from ffx.database import databaseContext
ctx.obj['database'] = databaseContext(
databasePath=database_file
if database_file
else ctx.obj['config'].getDatabaseFilePath()
)
# Define a subcommand
@ffx.command()
@@ -303,7 +346,7 @@ def version():
def help():
click.echo(f"ffx {VERSION}\n")
click.echo("Maintenance commands: setup, configure_workstation, upgrade")
click.echo("Media commands: shows, inspect, convert, rename, unmux, cropdetect")
click.echo("Media commands: shows, inspect, edit, convert, rename, unmux, cropdetect")
click.echo("Use 'ffx --help' or 'ffx <command> --help' for full command help.")
@@ -350,6 +393,41 @@ def getTrackedGitChanges(repoPath):
return [line for line in completed.stdout.splitlines() if line.strip()]
def getCurrentGitBranch(repoPath):
completed = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
cwd=repoPath,
capture_output=True,
text=True,
)
if completed.returncode != 0:
commandLabel = 'git rev-parse --abbrev-ref HEAD'
errorOutput = completed.stderr.strip() or completed.stdout.strip()
raise click.ClickException(
f"Unable to inspect bundle repository branch using '{commandLabel}': {errorOutput}"
)
return completed.stdout.strip() or "unknown"
def getBundleVersion(repoPath):
constantsPath = os.path.join(repoPath, 'src', 'ffx', 'constants.py')
try:
with open(constantsPath, encoding='utf-8') as constantsFile:
for line in constantsFile:
strippedLine = line.strip()
if strippedLine.startswith('VERSION=') or strippedLine.startswith('VERSION ='):
return strippedLine.split('=', 1)[1].strip().strip('"\'')
except OSError as ex:
raise click.ClickException(
f"Unable to inspect bundle version from {constantsPath}: {ex}"
) from ex
raise click.ClickException(f"Unable to inspect bundle version from {constantsPath}")
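The version lookup above just scans `constants.py` for a `VERSION` assignment and strips the quotes. A hypothetical standalone replica of that loop (`parse_bundle_version` is illustrative, not part of ffx) shows the behavior:

```python
# Minimal sketch of the VERSION parsing used by getBundleVersion:
# find a VERSION assignment, take the right-hand side, strip quotes.
def parse_bundle_version(lines):
    for line in lines:
        stripped = line.strip()
        if stripped.startswith('VERSION=') or stripped.startswith('VERSION ='):
            return stripped.split('=', 1)[1].strip().strip('"\'')
    return None  # caller raises when no VERSION line is found

print(parse_bundle_version(["# constants", "VERSION='0.4.2'", "DATABASE_VERSION = 3"]))
# → 0.4.2
```

Note that `DATABASE_VERSION = 3` is not matched, because the prefix check anchors at the start of the stripped line.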
def runScriptWrapper(ctx, scriptPath, missingDescription, commandArgs):
if not os.path.isfile(scriptPath):
raise click.ClickException(f"{missingDescription} not found at {scriptPath}")
@@ -364,6 +442,20 @@ def runScriptWrapper(ctx, scriptPath, missingDescription, commandArgs):
ctx.exit(completed.returncode)
def runTuiApp(ctx) -> None:
from ffx.ffx_app import FfxApp
from ffx.logging_utils import set_ffx_console_logging_enabled
logger = ctx.obj.get('logger')
set_ffx_console_logging_enabled(logger, enabled=False)
try:
app = FfxApp(ctx.obj)
app.run()
finally:
set_ffx_console_logging_enabled(logger, enabled=True)
@ffx.command(name='setup')
@click.pass_context
@click.option('--check', is_flag=True, default=False, help='Only verify bundle-setup readiness')
@@ -458,19 +550,75 @@ def upgrade(ctx, branch):
if completed.returncode != 0:
ctx.exit(completed.returncode)
upgradedBranch = getCurrentGitBranch(bundleRepoPath)
upgradedVersion = getBundleVersion(bundleRepoPath)
click.echo(f"Updated FFX to version {upgradedVersion} from branch {upgradedBranch}.")
@ffx.command()
@click.pass_context
@click.option('--shift', is_flag=True, default=False, help='Print resolved season-shift mapping for each file instead of opening the TUI')
@click.argument('filenames', nargs=-1)
def inspect(ctx, shift, filenames):
if not filenames:
raise click.ClickException("At least one filename is required.")
if shift:
from ffx.file_properties import FileProperties
from ffx.shifted_season_controller import ShiftedSeasonController
shiftedSeasonController = ShiftedSeasonController(ctx.obj)
for filename in filenames:
fileProperties = FileProperties(ctx.obj, filename)
season = fileProperties.getSeason()
episode = fileProperties.getEpisode()
if season == -1 or episode == -1:
click.echo(f"{filename}: no season/episode recognized")
continue
currentPattern = fileProperties.getPattern()
shiftedSeason, shiftedEpisode, sourceLabel = shiftedSeasonController.resolveShiftSeason(
fileProperties.getShowId(),
season=season,
episode=episode,
patternId=currentPattern.getId() if currentPattern is not None else None,
)
if shiftedSeason == season and shiftedEpisode == episode:
click.echo(f"{filename}: none")
else:
click.echo(
f"{filename}: {season}/{episode} -> {shiftedSeason}/{shiftedEpisode} from {sourceLabel}"
)
return
if len(filenames) != 1:
raise click.ClickException("Inspect without --shift requires exactly one filename.")
ctx.obj['command'] = 'inspect'
ctx.obj['arguments'] = {}
ctx.obj['arguments']['filename'] = filenames[0]
runTuiApp(ctx)
@ffx.command()
@click.pass_context
@click.argument('filename', nargs=1)
def inspect(ctx, filename):
from ffx.ffx_app import FfxApp
def edit(ctx, filename):
if not os.path.isfile(filename):
raise click.ClickException(f"File not found: {filename}")
ctx.obj['command'] = 'inspect'
ctx.obj['arguments'] = {}
ctx.obj['arguments']['filename'] = filename
ctx.obj['command'] = 'edit'
ctx.obj['arguments'] = {'filename': filename}
ctx.obj['use_pattern'] = False
ctx.obj['no_signature'] = True
ctx.obj['apply_metadata_cleanup'] = True
ctx.obj['apply_metadata_normalization'] = True
ctx.obj['resource_limits'] = ctx.obj.get('resource_limits', {})
app = FfxApp(ctx.obj)
app.run()
runTuiApp(ctx)
@ffx.command()
@@ -530,29 +678,33 @@ def rename(ctx, paths, prefix, season, suffix, dry_run):
def getUnmuxSequence(trackDescriptor: TrackDescriptor, sourcePath, targetPrefix, targetDirectory = ''):
from ffx.track_codec import TrackCodec
from ffx.track_type import TrackType
# executable and input file
commandTokens = list(FFMPEG_COMMAND_TOKENS) + ['-i', sourcePath]
trackType = trackDescriptor.getType()
trackCodec = trackDescriptor.getCodec()
trackFormat = trackDescriptor.getFormatDescriptor()
targetPathBase = os.path.join(targetDirectory, targetPrefix) if targetDirectory else targetPrefix
# mapping
commandTokens += ['-map',
f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}",
'-c',
'copy']
commandTokens += ['-map', f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}"]
trackCodec = trackDescriptor.getCodec()
if trackType == TrackType.VIDEO and trackCodec == TrackCodec.H265:
commandTokens += ['-c:v', 'copy', '-bsf:v', 'hevc_mp4toannexb']
else:
commandTokens += ['-c', 'copy']
# output format
codecFormat = trackCodec.format()
codecFormat = trackFormat.format()
if codecFormat is not None:
commandTokens += ['-f', codecFormat]
# output filename
commandTokens += [f"{targetPathBase}.{trackCodec.extension()}"]
commandTokens += [f"{targetPathBase}.{trackFormat.extension()}"]
return commandTokens
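The H.265 special case added here matters because a raw HEVC elementary stream needs Annex B framing, hence the `hevc_mp4toannexb` bitstream filter on copy. A standalone sketch of the token assembly (`build_unmux_tokens` and its arguments are illustrative simplifications, with the track-type indicator hard-coded to `v`):

```python
# Hypothetical replica of getUnmuxSequence's token assembly for one track.
# H.265 video is stream-copied through the hevc_mp4toannexb bitstream
# filter; everything else is a plain stream copy.
def build_unmux_tokens(source, sub_index, is_h265_video, fmt, ext, prefix):
    tokens = ['ffmpeg', '-i', source, '-map', f'0:v:{sub_index}']
    if is_h265_video:
        tokens += ['-c:v', 'copy', '-bsf:v', 'hevc_mp4toannexb']
    else:
        tokens += ['-c', 'copy']
    if fmt is not None:          # some formats need an explicit -f
        tokens += ['-f', fmt]
    tokens += [f'{prefix}.{ext}']
    return tokens

print(build_unmux_tokens('in.mkv', 0, True, None, 'hevc', 'out'))
# → ['ffmpeg', '-i', 'in.mkv', '-map', '0:v:0', '-c:v', 'copy', '-bsf:v', 'hevc_mp4toannexb', 'out.hevc']
```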
@@ -667,7 +819,7 @@ def unmux(ctx,
if not ctx.obj['dry_run']:
#TODO #425: Codec Enum
ctx.obj['logger'].info(f"Unmuxing stream {trackDescriptor.getIndex()} into file {targetPrefix}.{trackDescriptor.getCodec().extension()}")
ctx.obj['logger'].info(f"Unmuxing stream {trackDescriptor.getIndex()} into file {targetPrefix}.{trackDescriptor.getFormatDescriptor().extension()}")
ctx.obj['logger'].debug("Executing unmuxing sequence")
@@ -752,15 +904,12 @@ def cropdetect(ctx,
@click.pass_context
def shows(ctx):
from ffx.ffx_app import FfxApp
ctx.obj['command'] = 'shows'
app = FfxApp(ctx.obj)
app.run()
runTuiApp(ctx)
def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
from ffx.i18n import t
from ffx.track_disposition import TrackDisposition
from ffx.track_type import TrackType
@@ -770,38 +919,38 @@ def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
# The correct tokens should then be created by
if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
if context['no_prompt']:
raise click.ClickException('More than one default video stream detected and no prompt set')
defaultVideoTrackSubIndex = click.prompt("More than one default video stream detected! Please select stream", type=int)
raise click.ClickException(t('More than one default video stream detected and no prompt set'))
defaultVideoTrackSubIndex = click.prompt(t("More than one default video stream detected! Please select stream"), type=int)
mediaDescriptor.setDefaultSubTrack(TrackType.VIDEO, defaultVideoTrackSubIndex)
if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
if context['no_prompt']:
raise click.ClickException('More than one forced video stream detected and no prompt set')
forcedVideoTrackSubIndex = click.prompt("More than one forced video stream detected! Please select stream", type=int)
raise click.ClickException(t('More than one forced video stream detected and no prompt set'))
forcedVideoTrackSubIndex = click.prompt(t("More than one forced video stream detected! Please select stream"), type=int)
mediaDescriptor.setForcedSubTrack(TrackType.VIDEO, forcedVideoTrackSubIndex)
if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
if context['no_prompt']:
raise click.ClickException('More than one default audio stream detected and no prompt set')
defaultAudioTrackSubIndex = click.prompt("More than one default audio stream detected! Please select stream", type=int)
raise click.ClickException(t('More than one default audio stream detected and no prompt set'))
defaultAudioTrackSubIndex = click.prompt(t("More than one default audio stream detected! Please select stream"), type=int)
mediaDescriptor.setDefaultSubTrack(TrackType.AUDIO, defaultAudioTrackSubIndex)
if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
if context['no_prompt']:
raise click.ClickException('More than one forced audio stream detected and no prompt set')
forcedAudioTrackSubIndex = click.prompt("More than one forced audio stream detected! Please select stream", type=int)
raise click.ClickException(t('More than one forced audio stream detected and no prompt set'))
forcedAudioTrackSubIndex = click.prompt(t("More than one forced audio stream detected! Please select stream"), type=int)
mediaDescriptor.setForcedSubTrack(TrackType.AUDIO, forcedAudioTrackSubIndex)
if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
if context['no_prompt']:
raise click.ClickException('More than one default subtitle stream detected and no prompt set')
defaultSubtitleTrackSubIndex = click.prompt("More than one default subtitle stream detected! Please select stream", type=int)
raise click.ClickException(t('More than one default subtitle stream detected and no prompt set'))
defaultSubtitleTrackSubIndex = click.prompt(t("More than one default subtitle stream detected! Please select stream"), type=int)
mediaDescriptor.setDefaultSubTrack(TrackType.SUBTITLE, defaultSubtitleTrackSubIndex)
if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
if context['no_prompt']:
raise click.ClickException('More than one forced subtitle stream detected and no prompt set')
forcedSubtitleTrackSubIndex = click.prompt("More than one forced subtitle stream detected! Please select stream", type=int)
raise click.ClickException(t('More than one forced subtitle stream detected and no prompt set'))
forcedSubtitleTrackSubIndex = click.prompt(t("More than one forced subtitle stream detected! Please select stream"), type=int)
mediaDescriptor.setForcedSubTrack(TrackType.SUBTITLE, forcedSubtitleTrackSubIndex)
@@ -813,6 +962,8 @@ def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=DEFAULT_VIDEO_ENCODER_LABEL, help=f"Target video encoder (vp9, av1, h264 or copy)", show_default=True)
@click.option('--copy-video', is_flag=True, default=False, help=COPY_VIDEO_OPTION_HELP)
@click.option('--copy-audio', is_flag=True, default=False, help=COPY_AUDIO_OPTION_HELP)
@click.option('-q', '--quality', type=str, default="", help=f"Quality settings to be used with VP9/H264 encoder")
@click.option('-p', '--preset', type=str, default="", help=f"Quality preset to be used with AV1 encoder")
@@ -857,7 +1008,6 @@ def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):
metavar="DURATION|START,DURATION",
is_flag=False,
flag_value=DEFAULT_CUT_OPTION_VALUE,
default=None,
callback=normalizeCutOption,
help=CUT_OPTION_HELP,
)
@@ -910,6 +1060,8 @@ def convert(ctx,
paths,
label,
video_encoder,
copy_video,
copy_audio,
quality,
preset,
stereo_bitrate,
@@ -969,6 +1121,11 @@ def convert(ctx,
Suffixes will be appended to the filename in case of multiple created files
or if the filename has not changed."""
from ffx.ffx_controller import FfxController
from ffx.diagnostics import (
FfmpegSkipFileWarning,
getUnremediedIssues,
iterUnremediedIssueSummaryLines,
)
from ffx.file_properties import FileProperties
from ffx.filter.crop_filter import CropFilter
from ffx.filter.deinterlace_filter import DeinterlaceFilter
@@ -989,9 +1146,12 @@ def convert(ctx,
context = ctx.obj
context['video_encoder'] = VideoEncoder.fromLabel(video_encoder)
context['copy_video'] = copy_video
context['copy_audio'] = copy_audio
copyVideoEffective = copy_video or context['video_encoder'] == VideoEncoder.COPY
# HINT: quick and dirty override for h264, todo improve
if context['video_encoder'] in (VideoEncoder.H264, VideoEncoder.COPY):
if context['video_encoder'] in (VideoEncoder.H264, VideoEncoder.COPY) or copy_video or copy_audio:
targetFormat = ''
targetExtension = 'mkv'
else:
@@ -1124,36 +1284,54 @@ def convert(ctx,
tc = TmdbController() if context['use_tmdb'] else None
qualityKwargs = {QualityFilter.QUALITY_KEY: str(quality)}
if copyVideoEffective and quality:
ctx.obj['logger'].warning("Ignoring quality settings because video is being copied")
qualityKwargs = {
QualityFilter.QUALITY_KEY: "" if copyVideoEffective else str(quality)
}
qf = QualityFilter(**qualityKwargs)
if context['video_encoder'] == VideoEncoder.AV1 and preset:
if context['video_encoder'] == VideoEncoder.AV1 and preset and not copyVideoEffective:
presetKwargs = {PresetFilter.PRESET_KEY: preset}
PresetFilter(**presetKwargs)
cf = None
# if crop != 'none':
if crop == 'auto':
videoFilterOptionsRequested = (
crop != 'none'
or deinterlace != 'none'
or denoise != 'none'
or denoise_strength
or denoise_patch_size
or denoise_chroma_patch_size
or denoise_research_window
or denoise_chroma_research_window
)
if copyVideoEffective and videoFilterOptionsRequested:
ctx.obj['logger'].warning("Ignoring video filter options because video is being copied")
if crop == 'auto' and not copyVideoEffective:
cropKwargs = {}
cf = CropFilter(**cropKwargs)
denoiseKwargs = {}
if denoise_strength:
if denoise_strength and not copyVideoEffective:
denoiseKwargs[NlmeansFilter.STRENGTH_KEY] = denoise_strength
if denoise_patch_size:
if denoise_patch_size and not copyVideoEffective:
denoiseKwargs[NlmeansFilter.PATCH_SIZE_KEY] = denoise_patch_size
if denoise_chroma_patch_size:
if denoise_chroma_patch_size and not copyVideoEffective:
denoiseKwargs[NlmeansFilter.CHROMA_PATCH_SIZE_KEY] = denoise_chroma_patch_size
if denoise_research_window:
if denoise_research_window and not copyVideoEffective:
denoiseKwargs[NlmeansFilter.RESEARCH_WINDOW_KEY] = denoise_research_window
if denoise_chroma_research_window:
if denoise_chroma_research_window and not copyVideoEffective:
denoiseKwargs[NlmeansFilter.CHROMA_RESEARCH_WINDOW_KEY] = denoise_chroma_research_window
if denoise != 'none' or denoiseKwargs:
if not copyVideoEffective and (denoise != 'none' or denoiseKwargs):
NlmeansFilter(**denoiseKwargs)
if deinterlace != 'none':
if deinterlace != 'none' and not copyVideoEffective:
DeinterlaceFilter()
chainYield = list(qf.getChainYield())
@@ -1213,10 +1391,12 @@ def convert(ctx,
sourceMediaDescriptor = mediaFileProperties.getMediaDescriptor()
from ffx.attachment_format import AttachmentFormat
if ([smd for smd in sourceMediaDescriptor.getSubtitleTracks()
if smd.getCodec() == TrackCodec.ASS]
and [amd for amd in sourceMediaDescriptor.getAttachmentTracks()
if amd.getCodec() == TrackCodec.TTF]):
if amd.getAttachmentFormat() == AttachmentFormat.TTF]):
targetFormat = ''
targetExtension = 'mkv'
@@ -1425,18 +1605,30 @@ def convert(ctx,
if rename_only:
shutil.move(sourcePath, targetPath)
else:
fc.runJob(sourcePath,
targetPath,
targetFormat,
chainIteration,
cropArguments,
currentPattern,
currentShowDescriptor)
try:
fc.runJob(sourcePath,
targetPath,
targetFormat,
chainIteration,
cropArguments,
currentPattern,
currentShowDescriptor)
except FfmpegSkipFileWarning:
if os.path.exists(targetPath):
os.remove(targetPath)
continue
endTime = time.perf_counter()
ctx.obj['logger'].info(f"\nDONE\nTime elapsed {endTime - startTime}")
unremediedIssues = getUnremediedIssues(context)
if unremediedIssues:
ctx.obj['logger'].warning("\nFiles with ffmpeg findings that require review:")
for summaryLine in iterUnremediedIssueSummaryLines(context):
ctx.obj['logger'].warning(summaryLine)
else:
ctx.obj['logger'].info("All files converted with no issues.")
if __name__ == '__main__':


@@ -16,6 +16,7 @@ class ConfigurationController():
DATABASE_PATH_CONFIG_KEY = 'databasePath'
LOG_DIRECTORY_CONFIG_KEY = 'logDirectory'
SUBTITLES_DIRECTORY_CONFIG_KEY = 'subtitlesDirectory'
LANGUAGE_CONFIG_KEY = 'language'
OUTPUT_FILENAME_TEMPLATE_KEY = 'outputFilenameTemplate'
DEFAULT_INDEX_SEASON_DIGITS_CONFIG_KEY = 'defaultIndexSeasonDigits'
DEFAULT_INDEX_EPISODE_DIGITS_CONFIG_KEY = 'defaultIndexEpisodeDigits'
@@ -68,6 +69,9 @@ class ConfigurationController():
)
return os.path.expanduser(str(subtitlesDirectory)) if subtitlesDirectory else ''
def getLanguage(self):
return str(self.__configurationData.get(ConfigurationController.LANGUAGE_CONFIG_KEY, '')).strip()
@classmethod
def getConfiguredIntegerValue(cls, configurationData: dict, configKey: str, defaultValue: int) -> int:
configuredValue = configurationData.get(configKey, defaultValue)

src/ffx/confirm_screen.py Normal file

@@ -0,0 +1,80 @@
from textual.containers import Grid
from textual.screen import Screen
from textual.widgets import Button, Footer, Header, Static
from .i18n import t
from .screen_support import build_screen_log_pane
class ConfirmScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 4 7;
grid-rows: 2 2 2 2 2 2 2;
grid-columns: 1fr 1fr 1fr 1fr;
height: 100%;
width: 100%;
min-width: 80;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Button {
border: none;
}
.four {
column-span: 4;
}
"""
def __init__(
self,
message: str,
confirm_label: str = "Confirm",
cancel_label: str = "Cancel",
):
super().__init__()
self.__message = str(message)
self.__confirmLabel = str(t(confirm_label))
self.__cancelLabel = str(t(cancel_label))
def compose(self):
yield Header()
with Grid():
# Row 1
yield Static(self.__message, classes="four")
# Row 2
yield Static(" ", classes="four")
# Row 3
yield Button(self.__confirmLabel, id="confirm_button")
yield Button(self.__cancelLabel, id="cancel_button")
yield build_screen_log_pane()
yield Footer()
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
def on_button_pressed(self, event: Button.Pressed) -> None:
if event.button.id == "confirm_button":
self.dismiss(True)
if event.button.id == "cancel_button":
self.dismiss(False)
def action_back(self):
self.dismiss(False)


@@ -1,4 +1,4 @@
VERSION='0.2.4'
VERSION='0.4.2'
DATABASE_VERSION = 3
DEFAULT_QUALITY = 32


@@ -0,0 +1,24 @@
from .base import FfmpegRemedy, FfmpegRemedyDecision, FfmpegSkipFileWarning
from .monitor import FfmpegCommandRunner, FfmpegDiagnosticMonitor
from .retry_with_generated_pts import RetryWithGeneratedPtsRemedy
from .state import (
getDiagnosticsState,
getUnremediedIssues,
iterUnremediedIssueSummaryLines,
recordUnremediedIssue,
)
from .warn_corrupt_mpeg_audio import WarnCorruptMpegAudioRemedy
__all__ = [
"FfmpegCommandRunner",
"FfmpegDiagnosticMonitor",
"FfmpegRemedy",
"FfmpegRemedyDecision",
"FfmpegSkipFileWarning",
"RetryWithGeneratedPtsRemedy",
"WarnCorruptMpegAudioRemedy",
"getDiagnosticsState",
"getUnremediedIssues",
"iterUnremediedIssueSummaryLines",
"recordUnremediedIssue",
]


@@ -0,0 +1,33 @@
from __future__ import annotations
from dataclasses import dataclass
class FfmpegSkipFileWarning(Exception):
pass
@dataclass(frozen=True)
class FfmpegRemedyDecision:
stop_process: bool = False
retry_input_tokens: tuple[str, ...] = ()
skip_file: bool = False
console_warning: str = ""
summary_identifier: str = ""
unremedied_issue_identifier: str = ""
@property
def retry_requested(self) -> bool:
return bool(self.retry_input_tokens)
class FfmpegRemedy:
identifier = "ffmpeg-remedy"
harmless = False
def inspect_line(
self,
line: str,
session: "FfmpegDiagnosticMonitor",
) -> FfmpegRemedyDecision | None:
raise NotImplementedError


@@ -0,0 +1,222 @@
from __future__ import annotations
import re
from ffx.logging_utils import get_ffx_logger
from ffx.process import executeProcess
from .base import FfmpegSkipFileWarning, FfmpegRemedy
from .retry_with_generated_pts import RetryWithGeneratedPtsRemedy
from .state import recordUnremediedIssue
from .warn_corrupt_mpeg_audio import WarnCorruptMpegAudioRemedy
UNHANDLED_DIAGNOSTIC_PATTERNS = (
re.compile(r"\bwarning\b", re.IGNORECASE),
re.compile(r"\berror\b", re.IGNORECASE),
re.compile(r"\bfailed\b", re.IGNORECASE),
re.compile(r"\binvalid\b", re.IGNORECASE),
re.compile(r"\bmissing\b", re.IGNORECASE),
re.compile(r"\bcorrupt\b", re.IGNORECASE),
re.compile(r"\boverflow\b", re.IGNORECASE),
re.compile(r"\bdeprecated\b", re.IGNORECASE),
)
class FfmpegDiagnosticMonitor:
def __init__(
self,
context: dict | None,
command_sequence: list[str],
*,
remedies: list[FfmpegRemedy] | None = None,
emittedWarnings: set[str] | None = None,
):
self.context = context or {}
self.command_sequence = list(command_sequence)
self.logger = self.context.get("logger", get_ffx_logger())
self.source_path = str(self.context.get("current_source_path", "")).strip()
self.remedies = remedies or [
RetryWithGeneratedPtsRemedy(),
WarnCorruptMpegAudioRemedy(),
]
self._emittedWarnings = emittedWarnings if emittedWarnings is not None else set()
self.retry_input_tokens: tuple[str, ...] = ()
self.skip_file = False
self.skip_file_message = ""
def describe_source(self) -> str:
return self.source_path if self.source_path else "current file"
def command_contains_tokens(self, tokens: tuple[str, ...]) -> bool:
tokenCount = len(tokens)
if tokenCount == 0:
return True
return any(
tuple(self.command_sequence[index:index + tokenCount]) == tuple(tokens)
for index in range(len(self.command_sequence) - tokenCount + 1)
)
def emitConsoleWarning(self, warningMessage: str) -> None:
if warningMessage and warningMessage not in self._emittedWarnings:
self.logger.warning(warningMessage)
self._emittedWarnings.add(warningMessage)
def recordUnremediedIssue(self, issueIdentifier: str, issueLine: str) -> None:
isFirstIssueForFile = recordUnremediedIssue(
self.context,
self.describe_source(),
issueIdentifier,
)
if not isFirstIssueForFile:
return
self.emitConsoleWarning(
f"ffmpeg reported a diagnostic with no automatic remedy while converting "
+ f"{self.describe_source()}. FFX will continue, but review the output "
+ f"file. First unhandled line: {issueLine}"
)
def lineLooksLikeUnhandledDiagnostic(self, line: str) -> bool:
return any(pattern.search(line) for pattern in UNHANDLED_DIAGNOSTIC_PATTERNS)
def getUnhandledDiagnosticIdentifier(self, line: str) -> str:
loweredLine = str(line).lower()
if any(token in loweredLine for token in ("error", "failed", "invalid", "missing", "corrupt", "overflow")):
return "unhandled-error"
if any(token in loweredLine for token in ("warning", "deprecated")):
return "unhandled-warning"
return "unhandled-diagnostic"
def getSummaryIdentifier(
self,
remedy: FfmpegRemedy,
decision,
) -> str:
explicitIdentifier = str(decision.summary_identifier).strip()
if explicitIdentifier:
return explicitIdentifier
remedyIdentifier = str(getattr(remedy, "identifier", "")).strip()
if remedyIdentifier and remedyIdentifier != FfmpegRemedy.identifier:
return remedyIdentifier
return str(decision.unremedied_issue_identifier).strip()
def shouldRecordSummary(
self,
remedy: FfmpegRemedy,
decision,
) -> bool:
if getattr(remedy, "harmless", False):
return False
if decision.retry_requested and not decision.skip_file:
return False
return bool(self.getSummaryIdentifier(remedy, decision))
def handle_stderr_line(self, line: str) -> bool:
strippedLine = str(line).strip()
if not strippedLine:
return False
for remedy in self.remedies:
decision = remedy.inspect_line(strippedLine, self)
if decision is None:
continue
self.emitConsoleWarning(decision.console_warning)
if decision.retry_requested:
self.retry_input_tokens = tuple(decision.retry_input_tokens)
if self.shouldRecordSummary(remedy, decision):
recordUnremediedIssue(
self.context,
self.describe_source(),
self.getSummaryIdentifier(remedy, decision),
)
if decision.skip_file:
self.skip_file = True
self.skip_file_message = (
decision.console_warning
or f"Skipping file {self.describe_source()} because ffmpeg reported a fatal diagnostic."
)
return bool(decision.stop_process)
if self.lineLooksLikeUnhandledDiagnostic(strippedLine):
self.recordUnremediedIssue(
self.getUnhandledDiagnosticIdentifier(strippedLine),
strippedLine,
)
return False
@property
def retry_requested(self) -> bool:
return bool(self.retry_input_tokens)
def insertFfmpegInputOptions(
commandSequence: list[str],
extraTokens: tuple[str, ...],
) -> list[str]:
if not extraTokens:
return list(commandSequence)
if not commandSequence:
return list(extraTokens)
return [commandSequence[0]] + list(extraTokens) + list(commandSequence[1:])
class FfmpegCommandRunner:
def __init__(
self,
context: dict | None,
*,
remedies: list[FfmpegRemedy] | None = None,
):
self.__context = context or {}
self.__remedies = remedies
def execute(
self,
commandSequence: list[str],
*,
directory: str | None = None,
timeoutSeconds: float | None = None,
):
emittedWarnings: set[str] = set()
attemptCommandSequence = list(commandSequence)
while True:
monitor = FfmpegDiagnosticMonitor(
self.__context,
attemptCommandSequence,
remedies=self.__remedies,
emittedWarnings=emittedWarnings,
)
out, err, rc = executeProcess(
attemptCommandSequence,
directory=directory,
context=self.__context,
timeoutSeconds=timeoutSeconds,
stderrLineHandler=monitor.handle_stderr_line,
)
if monitor.retry_requested:
attemptCommandSequence = insertFfmpegInputOptions(
attemptCommandSequence,
monitor.retry_input_tokens,
)
continue
if monitor.skip_file:
raise FfmpegSkipFileWarning(monitor.skip_file_message)
return out, err, rc
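Two small helpers carry the retry mechanics above: a sliding-window token match that detects whether a retry option is already present in the command, and an insertion that places extra input options directly after the executable name (so they act as input options, before `-i`). Standalone replicas:

```python
# Replicas of command_contains_tokens and insertFfmpegInputOptions from the
# diagnostic monitor, written as free functions for illustration.
def command_contains_tokens(command, tokens):
    n = len(tokens)
    if n == 0:
        return True
    # compare every length-n window against the wanted token run
    return any(tuple(command[i:i + n]) == tuple(tokens)
               for i in range(len(command) - n + 1))

def insert_input_options(command, extra):
    if not extra:
        return list(command)
    if not command:
        return list(extra)
    # keep argv[0], splice the extra tokens in before everything else
    return [command[0]] + list(extra) + list(command[1:])

cmd = ['ffmpeg', '-i', 'in.mkv', 'out.mkv']
retried = insert_input_options(cmd, ('-fflags', '+genpts'))
print(retried)
# → ['ffmpeg', '-fflags', '+genpts', '-i', 'in.mkv', 'out.mkv']
print(command_contains_tokens(retried, ('-fflags', '+genpts')))
# → True
```

The `while True` loop in `FfmpegCommandRunner.execute` terminates because a retry adds the tokens, after which the token match makes the remedy skip instead of retrying again.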

View File

@@ -0,0 +1,41 @@
from __future__ import annotations
import re
from .base import FfmpegRemedy, FfmpegRemedyDecision
class RetryWithGeneratedPtsRemedy(FfmpegRemedy):
identifier = "retry-with-generated-pts"
RETRY_INPUT_TOKENS = ("-fflags", "+genpts")
TIMESTAMP_UNSET_PATTERN = re.compile(
r"Timestamps are unset in a packet for stream \d+"
)
def inspect_line(
self,
line: str,
session: "FfmpegDiagnosticMonitor",
) -> FfmpegRemedyDecision | None:
if self.TIMESTAMP_UNSET_PATTERN.search(line) is None:
return None
if session.command_contains_tokens(self.RETRY_INPUT_TOKENS):
return FfmpegRemedyDecision(
stop_process=True,
skip_file=True,
console_warning=(
f"Skipping file {session.describe_source()}: ffmpeg still reported "
+ "unset packet timestamps after retry with -fflags +genpts."
),
unremedied_issue_identifier="timestamp-unset-after-genpts",
)
return FfmpegRemedyDecision(
stop_process=True,
retry_input_tokens=self.RETRY_INPUT_TOKENS,
console_warning=(
f"ffmpeg reported unset packet timestamps for {session.describe_source()}. "
+ "Stopping early and retrying with -fflags +genpts."
),
)
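The remedy's two-step behavior can be condensed into a small sketch: on the first sighting of the timestamp diagnostic it requests a retry with `-fflags +genpts`; if the command already carries those tokens, it gives up and skips the file (`decide` is a hypothetical condensation, not ffx API):

```python
import re

# Same pattern as RetryWithGeneratedPtsRemedy.TIMESTAMP_UNSET_PATTERN.
PATTERN = re.compile(r"Timestamps are unset in a packet for stream \d+")

def decide(line, command_has_genpts):
    if PATTERN.search(line) is None:
        return None                      # not our diagnostic
    return 'skip' if command_has_genpts else 'retry'

line = "[mpegts] Timestamps are unset in a packet for stream 0"
print(decide(line, False))   # → retry
print(decide(line, True))    # → skip
print(decide("frame=  100", False))  # → None
```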


@@ -0,0 +1,53 @@
from __future__ import annotations
import os
DIAGNOSTICS_STATE_KEY = "diagnostics_state"
UNREMEDIED_ISSUES_KEY = "unremedied_issues"
def getDiagnosticsState(context: dict | None) -> dict:
if context is None:
return {UNREMEDIED_ISSUES_KEY: {}}
if DIAGNOSTICS_STATE_KEY not in context:
context[DIAGNOSTICS_STATE_KEY] = {
UNREMEDIED_ISSUES_KEY: {},
}
return context[DIAGNOSTICS_STATE_KEY]
def recordUnremediedIssue(
context: dict | None,
sourcePath: str,
identifier: str,
) -> bool:
if not sourcePath:
return False
diagnosticsState = getDiagnosticsState(context)
unremediedIssues = diagnosticsState[UNREMEDIED_ISSUES_KEY]
issueList = unremediedIssues.setdefault(sourcePath, [])
strippedIdentifier = str(identifier).strip()
if not strippedIdentifier or strippedIdentifier in issueList:
return False
issueList.append(strippedIdentifier)
return True
def getUnremediedIssues(context: dict | None) -> dict[str, list[str]]:
diagnosticsState = getDiagnosticsState(context)
return diagnosticsState.get(UNREMEDIED_ISSUES_KEY, {})
def iterUnremediedIssueSummaryLines(context: dict | None) -> list[str]:
summaryLines = []
unremediedIssues = getUnremediedIssues(context)
for sourcePath in sorted(unremediedIssues.keys()):
identifiers = unremediedIssues[sourcePath]
summaryLines.append(f"{os.path.basename(sourcePath)}: {', '.join(identifiers)}")
return summaryLines
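The state module's key property is deduplication: an issue identifier is recorded at most once per source path, and the boolean return tells the caller whether this was the first occurrence for that file (which is what gates the one-time console warning in the monitor). A standalone replica:

```python
# Replica of recordUnremediedIssue working on a plain dict instead of the
# click context; returns True only when the identifier is new for the path.
def record_issue(state, source_path, identifier):
    if not source_path:
        return False
    issues = state.setdefault('unremedied_issues', {}).setdefault(source_path, [])
    ident = str(identifier).strip()
    if not ident or ident in issues:
        return False
    issues.append(ident)
    return True

state = {}
print(record_issue(state, 'a.mkv', 'unhandled-error'))  # → True
print(record_issue(state, 'a.mkv', 'unhandled-error'))  # → False
print(state['unremedied_issues'])                       # → {'a.mkv': ['unhandled-error']}
```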


@@ -0,0 +1,35 @@
from __future__ import annotations
import re
from .base import FfmpegRemedy, FfmpegRemedyDecision
class WarnCorruptMpegAudioRemedy(FfmpegRemedy):
identifier = "warn-corrupt-mpeg-audio"
PATTERNS = (
re.compile(r"\[mp3float @ .*\] invalid block type", re.IGNORECASE),
re.compile(r"\[mp3float @ .*\] invalid new backstep -?\d+", re.IGNORECASE),
re.compile(r"\[mp3float @ .*\] Header missing"),
re.compile(r"\[mp3float @ .*\] overread, skip ", re.IGNORECASE),
re.compile(r"Error while decoding MPEG audio frame\."),
re.compile(
r"Error submitting packet to decoder: Invalid data found when processing input"
),
)
def inspect_line(
self,
line: str,
session: "FfmpegDiagnosticMonitor",
) -> FfmpegRemedyDecision | None:
if not any(pattern.search(line) for pattern in self.PATTERNS):
return None
return FfmpegRemedyDecision(
console_warning=(
f"ffmpeg reported damaged MPEG audio frames while converting "
+ f"{session.describe_source()}. FFX will continue, but the output "
+ "audio may contain gaps or glitches."
),
)


@@ -0,0 +1,27 @@
from .diagnostics import (
FfmpegCommandRunner,
FfmpegDiagnosticMonitor,
FfmpegRemedy,
FfmpegRemedyDecision,
FfmpegSkipFileWarning,
RetryWithGeneratedPtsRemedy,
WarnCorruptMpegAudioRemedy,
getDiagnosticsState,
getUnremediedIssues,
iterUnremediedIssueSummaryLines,
recordUnremediedIssue,
)
__all__ = [
"FfmpegCommandRunner",
"FfmpegDiagnosticMonitor",
"FfmpegRemedy",
"FfmpegRemedyDecision",
"FfmpegSkipFileWarning",
"RetryWithGeneratedPtsRemedy",
"WarnCorruptMpegAudioRemedy",
"getDiagnosticsState",
"getUnremediedIssues",
"iterUnremediedIssueSummaryLines",
"recordUnremediedIssue",
]


@@ -1,7 +1,10 @@
from textual.app import App
from .i18n import set_current_language, t
from .shows_screen import ShowsScreen
from .media_details_screen import MediaDetailsScreen
from .inspect_details_screen import InspectDetailsScreen
from .media_edit_screen import MediaEditScreen
from .screen_support import configure_screen_log_handler, set_screen_log_pane_enabled
class FfxApp(App):
@@ -9,8 +12,8 @@ class FfxApp(App):
TITLE = "FFX"
BINDINGS = [
("q", "quit()", "Quit"),
("h", "switch_mode('help')", "Help"),
("q", "quit()", t("Quit")),
("h", "switch_mode('help')", t("Help")),
]
@@ -19,6 +22,14 @@ class FfxApp(App):
# Data 'input' variable
self.context = context
set_current_language(self.context.get("language"))
debug_mode = bool(self.context.get("debug", False))
set_screen_log_pane_enabled(debug_mode)
configure_screen_log_handler(
self.context.get("logger"),
self,
enabled=debug_mode,
)
def on_mount(self) -> None:
@@ -29,10 +40,12 @@ class FfxApp(App):
self.push_screen(ShowsScreen())
if self.context['command'] == 'inspect':
self.push_screen(MediaDetailsScreen())
self.push_screen(InspectDetailsScreen())
if self.context['command'] == 'edit':
self.push_screen(MediaEditScreen())
def getContext(self):
"""Data 'output' method"""
return self.context

View File

@@ -1,7 +1,9 @@
import os, click
import os, click, subprocess
from functools import lru_cache
from logging import Logger
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.diagnostics import FfmpegCommandRunner
from ffx.media_descriptor import MediaDescriptor
from ffx.audio_layout import AudioLayout
@@ -61,10 +63,52 @@ class FfxController():
sourceMediaDescriptor)
self.__logger: Logger = context['logger']
self.__warnedH264Fallback = False
self.__ffmpegCommandRunner = FfmpegCommandRunner(context)
@staticmethod
@lru_cache(maxsize=None)
def isFfmpegEncoderAvailable(encoderName: str) -> bool:
completed = subprocess.run(
["ffmpeg", "-encoders"],
capture_output=True,
text=True,
check=False,
)
if completed.returncode != 0:
return False
resolvedEncoderName = str(encoderName).strip()
for line in completed.stdout.splitlines():
if not line.startswith(" "):
continue
tokens = line.split(maxsplit=2)
if len(tokens) >= 2 and tokens[1] == resolvedEncoderName:
return True
return False
@classmethod
def getSupportedSoftwareH264Encoder(cls) -> str | None:
if cls.isFfmpegEncoderAvailable("libx264"):
return "libx264"
if cls.isFfmpegEncoderAvailable("libopenh264"):
return "libopenh264"
return None
def executeCommandSequence(self, commandSequence):
out, err, rc = executeProcess(commandSequence, context=self.__context)
if commandSequence and str(commandSequence[0]).strip() == "ffmpeg":
out, err, rc = self.__ffmpegCommandRunner.execute(
commandSequence,
timeoutSeconds=None,
)
else:
out, err, rc = executeProcess(commandSequence, context=self.__context)
if rc:
raise click.ClickException(f"Command resulted in error: rc={rc} error={err}")
return out, err, rc
@@ -79,10 +123,27 @@ class FfxController():
# -c:v libx264 -preset slow -crf 17
def generateH264Tokens(self, quality, subIndex : int = 0):
h264Encoder = self.getSupportedSoftwareH264Encoder()
return [f"-c:v:{int(subIndex)}", 'libx264',
"-preset", "slow",
'-crf', str(quality)]
if h264Encoder == "libx264":
return [f"-c:v:{int(subIndex)}", 'libx264',
"-preset", "slow",
'-crf', str(quality)]
if h264Encoder == "libopenh264":
if not self.__warnedH264Fallback:
self.__logger.warning(
"libx264 encoder unavailable; falling back to libopenh264 for H.264 encoding."
)
self.__warnedH264Fallback = True
return [f"-c:v:{int(subIndex)}", 'libopenh264',
'-pix_fmt', 'yuv420p']
raise click.ClickException(
"H.264 encoding requested but no supported software H.264 encoder is available. "
+ "Tried libx264 and libopenh264."
)
# -c:v:0 libvpx-vp9 -row-mt 1 -crf 32 -pass 1 -speed 4 -frame-parallel 0 -g 9999 -aq-mode 0
@@ -119,6 +180,16 @@ class FfxController():
def generateAudioCopyTokens(self, subIndex):
return [f"-c:a:{int(subIndex)}", 'copy']
def generateVideoCopyAllTokens(self):
if self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
return ["-c:v", "copy"]
return []
def generateAudioCopyAllTokens(self):
if self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.AUDIO):
return ["-c:a", "copy"]
return []
def generateSubtitleCopyTokens(self, subIndex):
return [f"-c:s:{int(subIndex)}", 'copy']
@@ -239,6 +310,12 @@ class FfxController():
return audioTokens
def generateAudioProcessingTokens(self):
if self.__context.get('copy_audio', False):
return self.generateAudioCopyAllTokens()
return self.generateAudioEncodingTokens()
def runJob(self,
sourcePath,
targetPath,
@@ -252,6 +329,8 @@ class FfxController():
videoEncoder: VideoEncoder = self.__context.get('video_encoder', VideoEncoder.VP9)
self.__context['current_source_path'] = sourcePath
copyVideo = self.__context.get('copy_video', False) or videoEncoder == VideoEncoder.COPY
qualityFilters = [fy for fy in chainIteration if fy['identifier'] == 'quality']
@@ -262,30 +341,35 @@ class FfxController():
deinterlaceFilters = [fy for fy in chainIteration if fy['identifier'] == 'bwdif']
if qualityFilters and (quality := qualityFilters[0]['parameters']['quality']):
self.__logger.info(f"Setting quality {quality} from command line")
elif currentPattern is not None and (quality := currentPattern.quality):
self.__logger.info(f"Setting quality {quality} from pattern")
elif currentShowDescriptor is not None and (quality := currentShowDescriptor.getQuality()):
self.__logger.info(f"Setting quality {quality} from show")
if copyVideo:
quality = None
self.__context['encoding_metadata_tags'] = {}
else:
quality = (QualityFilter.DEFAULT_H264_QUALITY
if (videoEncoder == VideoEncoder.H264)
else QualityFilter.DEFAULT_VP9_QUALITY)
self.__logger.info(f"Setting quality {quality} from default")
if qualityFilters and (quality := qualityFilters[0]['parameters']['quality']):
self.__logger.info(f"Setting quality {quality} from command line")
elif currentPattern is not None and (quality := currentPattern.quality):
self.__logger.info(f"Setting quality {quality} from pattern")
elif currentShowDescriptor is not None and (quality := currentShowDescriptor.getQuality()):
self.__logger.info(f"Setting quality {quality} from show")
else:
quality = (QualityFilter.DEFAULT_H264_QUALITY
if (videoEncoder == VideoEncoder.H264)
else QualityFilter.DEFAULT_VP9_QUALITY)
self.__logger.info(f"Setting quality {quality} from default")
preset = presetFilters[0]['parameters']['preset'] if presetFilters else PresetFilter.DEFAULT_PRESET
self.__context['encoding_metadata_tags'] = self.generateEncodingMetadataTags(
videoEncoder,
quality,
preset,
)
if not copyVideo:
self.__context['encoding_metadata_tags'] = self.generateEncodingMetadataTags(
videoEncoder,
quality,
preset,
)
filterParamTokens = []
if cropArguments:
if cropArguments and not copyVideo:
cropParams = (f"crop="
+ f"{cropArguments[CropFilter.OUTPUT_WIDTH_KEY]}"
@@ -295,8 +379,9 @@ class FfxController():
filterParamTokens.append(cropParams)
filterParamTokens.extend(denoiseFilters[0]['tokens'] if denoiseFilters else [])
filterParamTokens.extend(deinterlaceFilters[0]['tokens'] if deinterlaceFilters else [])
if not copyVideo:
filterParamTokens.extend(denoiseFilters[0]['tokens'] if denoiseFilters else [])
filterParamTokens.extend(deinterlaceFilters[0]['tokens'] if deinterlaceFilters else [])
@@ -327,6 +412,29 @@ class FfxController():
self.executeCommandSequence(commandSequence)
return
if copyVideo:
commandSequence = (commandTokens
+ self.__targetMediaDescriptor.getImportFileTokens()
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
+ self.__mdcs.generateDispositionTokens())
commandSequence += self.__mdcs.generateMetadataTokens()
commandSequence += self.generateVideoCopyAllTokens()
commandSequence += self.generateAudioProcessingTokens()
if self.__context['perform_cut']:
commandSequence += self.generateCropTokens()
commandSequence += self.generateOutputTokens(targetPath,
targetFormat)
self.__logger.debug("FfxController.runJob(): Running command sequence")
if not self.__context['dry_run']:
self.executeCommandSequence(commandSequence)
return
if videoEncoder == VideoEncoder.AV1:
commandSequence = (commandTokens
@@ -343,7 +451,7 @@ class FfxController():
if td.getCodec() != TrackCodec.PNG:
commandSequence += self.generateAV1Tokens(int(quality), int(preset))
commandSequence += self.generateAudioEncodingTokens()
commandSequence += self.generateAudioProcessingTokens()
if self.__context['perform_cut']:
commandSequence += self.generateCropTokens()
@@ -373,7 +481,7 @@ class FfxController():
if td.getCodec() != TrackCodec.PNG:
commandSequence += self.generateH264Tokens(int(quality))
commandSequence += self.generateAudioEncodingTokens()
commandSequence += self.generateAudioProcessingTokens()
if self.__context['perform_cut']:
commandSequence += self.generateCropTokens()
@@ -432,7 +540,7 @@ class FfxController():
if td.getCodec() != TrackCodec.PNG:
commandSequence2 += self.generateVP9Pass2Tokens(int(quality))
commandSequence2 += self.generateAudioEncodingTokens()
commandSequence2 += self.generateAudioProcessingTokens()
if self.__context['perform_cut']:
commandSequence2 += self.generateCropTokens()
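The encoder probe in `isFfmpegEncoderAvailable` above keys off the layout of `ffmpeg -encoders` output: encoder rows are indented and carry the encoder name as their second token. A self-contained sketch of that token scan, run against canned output rather than a live ffmpeg binary (the sample text is illustrative):

```python
# Canned excerpt shaped like `ffmpeg -encoders` output (illustrative, not captured).
SAMPLE_OUTPUT = """Encoders:
 V..... = Video
 ------
 V..... libx264              libx264 H.264 / AVC / MPEG-4 AVC
 A....D aac                  AAC (Advanced Audio Coding)
"""

def encoder_listed(stdout: str, encoder_name: str) -> bool:
    """Mirror the probe's scan: indented line, encoder name as second token."""
    wanted = encoder_name.strip()
    for line in stdout.splitlines():
        if not line.startswith(" "):
            continue
        tokens = line.split(maxsplit=2)
        if len(tokens) >= 2 and tokens[1] == wanted:
            return True
    return False
```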

View File

@@ -63,11 +63,19 @@ class FileProperties():
self.__sourceFileBasename = self.__sourceFilename
self.__sourceFilenameExtension = ''
self.__pc = PatternController(context)
self.__usePattern = bool(self.context.get('use_pattern', True))
self.__pc = (
PatternController(context)
if self.__usePattern and 'database' in self.context
else None
)
# Checking if database contains matching pattern
matchResult = self.__pc.matchFilename(self.__sourceFilename) if self.__usePattern else {}
matchResult = (
self.__pc.matchFilename(self.__sourceFilename)
if self.__pc is not None
else {}
)
self.__logger.debug(f"FileProperties.__init__(): Match result: {matchResult}")

View File

@@ -2,12 +2,30 @@ from textual.app import ComposeResult
from textual.screen import Screen
from textual.widgets import Footer, Placeholder
from .i18n import t
from .screen_support import build_screen_log_pane, go_back_or_exit
class HelpScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
def __init__(self):
super().__init__()
self.context = self.app.getContext()
def compose(self) -> ComposeResult:
yield Placeholder("Help Screen")
# Row 1
yield Placeholder(t("Help Screen"))
yield build_screen_log_pane()
yield Footer()
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
def action_back(self):
go_back_or_exit(self)

View File

@@ -6,12 +6,23 @@ from .configuration_controller import ConfigurationController
from .logging_utils import get_ffx_logger
from .show_descriptor import ShowDescriptor
from enum import Enum
class EmptyStringUndefined(Undefined):
def __str__(self):
return ''
class LogLevel(Enum):
DEBUG = 'debug'
INFO = 'info'
WARNING = 'warning'
ERROR = 'error'
CRITICAL = 'critical'
DIFF_ADDED_KEY = 'added'
DIFF_REMOVED_KEY = 'removed'
DIFF_CHANGED_KEY = 'changed'
@@ -119,7 +130,7 @@ def setDiff(a : set, b : set) -> set:
def permutateList(inputList: list, permutation: list):
# 0,1,2: ABC
# 0,2,1: ACB
# 1,2,0: BCA
pass

src/ffx/i18n.py (new file, 158 lines)
View File

@@ -0,0 +1,158 @@
from __future__ import annotations
import json
import os
from pathlib import Path
DEFAULT_LANGUAGE = "de"
SOURCE_LANGUAGE = "en"
SUPPORTED_LANGUAGES = {
"de": "Deutsch",
"en": "English",
"fr": "Français",
"ja": "日本語",
"nb": "Norsk bokmål",
"eo": "Esperanto",
"ta": "தமிழ்",
"pt": "Português",
"es": "Español",
}
LANGUAGE_ALIASES = {
"deu": "de",
"ger": "de",
"english": "en",
"eng": "en",
"fra": "fr",
"fre": "fr",
"french": "fr",
"jpn": "ja",
"japanese": "ja",
"nor": "nb",
"nob": "nb",
"no": "nb",
"nn": "nb",
"bokmal": "nb",
"norwegian": "nb",
"epo": "eo",
"esperanto": "eo",
"tam": "ta",
"tamil": "ta",
"por": "pt",
"portuguese": "pt",
"spa": "es",
"spanish": "es",
}
_catalog_cache: dict[str, dict] = {}
_current_language = DEFAULT_LANGUAGE
def _assets_directory() -> Path:
return Path(__file__).resolve().parents[2] / "assets" / "i18n"
def normalize_language_code(value: str | None) -> str | None:
if value is None:
return None
normalized = str(value).strip().replace("-", "_")
if not normalized:
return None
base_language = normalized.split(".")[0].split("_")[0].lower()
if base_language in SUPPORTED_LANGUAGES:
return base_language
return LANGUAGE_ALIASES.get(base_language)
def detect_system_language(env: dict[str, str] | None = None) -> str | None:
environment = env or os.environ
for key in ("LC_ALL", "LC_MESSAGES", "LANG"):
if language_code := normalize_language_code(environment.get(key)):
return language_code
return None
def get_default_config_path(home_directory: str | None = None) -> Path:
base_home = Path(home_directory or os.path.expanduser("~"))
return base_home / ".local" / "etc" / "ffx.json"
def read_configured_language(
config_path: str | os.PathLike | None = None,
*,
home_directory: str | None = None,
) -> str | None:
resolved_path = Path(config_path) if config_path is not None else get_default_config_path(home_directory)
if not resolved_path.is_file():
return None
try:
config_data = json.loads(resolved_path.read_text(encoding="utf-8"))
except (OSError, ValueError, TypeError):
return None
return normalize_language_code(config_data.get("language"))
def resolve_application_language(
*,
cli_language: str | None = None,
config_language: str | None = None,
system_language: str | None = None,
env: dict[str, str] | None = None,
) -> str:
for candidate in (
cli_language,
config_language,
system_language or detect_system_language(env),
):
if normalized := normalize_language_code(candidate):
return normalized
return DEFAULT_LANGUAGE
def set_current_language(language_code: str | None) -> str:
global _current_language
_current_language = normalize_language_code(language_code) or DEFAULT_LANGUAGE
return _current_language
def get_current_language() -> str:
return _current_language
def _load_catalog(language_code: str) -> dict:
normalized = normalize_language_code(language_code) or DEFAULT_LANGUAGE
if normalized not in _catalog_cache:
catalog_path = _assets_directory() / f"{normalized}.json"
if catalog_path.is_file():
_catalog_cache[normalized] = json.loads(catalog_path.read_text(encoding="utf-8"))
else:
_catalog_cache[normalized] = {"phrases": {}, "iso_languages": {}}
return _catalog_cache[normalized]
def _lookup_phrase(language_code: str, source_text: str) -> str | None:
phrases = _load_catalog(language_code).get("phrases", {})
return phrases.get(source_text)
def t(source_text: str, **kwargs) -> str:
translated = (
_lookup_phrase(get_current_language(), source_text)
or _lookup_phrase(SOURCE_LANGUAGE, source_text)
or source_text
)
return translated.format(**kwargs) if kwargs else translated
def translate_iso_language(member_name: str, fallback: str) -> str:
for language_code in (get_current_language(), SOURCE_LANGUAGE):
translations = _load_catalog(language_code).get("iso_languages", {})
if member_name in translations:
return str(translations[member_name])
return str(fallback)
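The resolution order implemented above (CLI flag, then config file, then environment, then the default) can be condensed into a standalone sketch. The helper names mirror the module but are re-declared here with a tiny language table so the example runs on its own:

```python
# Miniature language table standing in for SUPPORTED_LANGUAGES / LANGUAGE_ALIASES.
SUPPORTED = {"de", "en", "fr"}
ALIASES = {"eng": "en", "deu": "de", "fra": "fr"}
DEFAULT_LANGUAGE = "de"

def normalize(value):
    """Reduce 'de_DE.UTF-8' / 'eng' style inputs to a supported base code."""
    if not value:
        return None
    base = str(value).strip().replace("-", "_").split(".")[0].split("_")[0].lower()
    if base in SUPPORTED:
        return base
    return ALIASES.get(base)

def resolve(cli=None, config=None, env=None):
    """First match wins: CLI flag, config file, environment, then default."""
    env = env or {}
    system = None
    for key in ("LC_ALL", "LC_MESSAGES", "LANG"):
        if code := normalize(env.get(key)):
            system = code
            break
    for candidate in (cli, config, system):
        if code := normalize(candidate):
            return code
    return DEFAULT_LANGUAGE
```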

View File

@@ -0,0 +1,603 @@
import re
import click
from rich.text import Text
from textual.containers import Grid
from textual.widgets import Button, Footer, Header, Input, Static
from textual.widgets._data_table import CellDoesNotExist
from ffx.file_properties import FileProperties
from ffx.helper import DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.show_descriptor import ShowDescriptor
from ffx.track_descriptor import TrackDescriptor
from .i18n import t
from .media_workflow_screen_base import MediaWorkflowScreenBase
from .pattern_details_screen import PatternDetailsScreen
from .screen_support import (
add_auto_table_column,
build_screen_controllers,
build_screen_log_pane,
go_back_or_exit,
localized_column_width,
update_table_column_label,
)
from .show_details_screen import ShowDetailsScreen
class InspectDetailsScreen(MediaWorkflowScreenBase):
GRID_COLUMN_LABEL_MIN = 12
GRID_COLUMN_2 = 20
GRID_COLUMN_3 = 40
GRID_COLUMN_4 = "4fr"
GRID_COLUMN_5 = 10
GRID_COLUMN_6 = "5fr"
CSS = f"""
Grid {{
grid-size: 6 8;
grid-rows: 9 2 2 2 2 10 2 10;
grid-columns: {GRID_COLUMN_LABEL_MIN} {GRID_COLUMN_2} {GRID_COLUMN_3} {GRID_COLUMN_4} {GRID_COLUMN_5} {GRID_COLUMN_6};
height: 100%;
width: 100%;
min-width: 120;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}}
DataTable .datatable--cursor {{
background: darkorange;
color: black;
}}
DataTable .datatable--header {{
background: steelblue;
color: white;
}}
Input {{
border: none;
}}
Button {{
border: none;
}}
DataTable {{
min-height: 24;
width: 100%;
}}
.two {{
column-span: 2;
}}
.three {{
column-span: 3;
}}
.four {{
column-span: 4;
}}
.five {{
column-span: 5;
}}
#differences-table {{
row-span: 10;
}}
.yellow {{
tint: yellow 40%;
}}
"""
@classmethod
def _grid_columns_spec(cls, label_column_width: int | None = None) -> str:
return " ".join(
[
str(
cls.GRID_COLUMN_LABEL_MIN
if label_column_width is None
else int(label_column_width)
),
str(cls.GRID_COLUMN_2),
str(cls.GRID_COLUMN_3),
str(cls.GRID_COLUMN_4),
str(cls.GRID_COLUMN_5),
str(cls.GRID_COLUMN_6),
]
)
COMMAND_NAME = "inspect"
DIFFERENCES_COLUMN_LABEL = "Differences (file->db/output)"
BINDINGS = [
("escape", "back", t("Back")),
("q", "app.quit", t("Quit")),
("n", "new_pattern", t("New Pattern")),
("u", "update_pattern", t("Update Pattern")),
("e", "edit_pattern", t("Edit Pattern")),
]
def __init__(self):
self._showRowData: dict[object, ShowDescriptor | None] = {}
self._showSortColumnKey = None
self._showSortReverse = False
self._showColumnLabels: dict[object, str] = {}
super().__init__()
controllers = build_screen_controllers(
self.context,
pattern=True,
show=True,
track=True,
tag=True,
)
self._pc = controllers["pattern"]
self._sc = controllers["show"]
self._tc = controllers["track"]
self._tac = controllers["tag"]
self.reloadProperties(reset_draft=True)
def compose(self):
self._build_media_tags_table()
self._build_tracks_table()
self._build_differences_table()
yield Header()
with Grid(id="main_grid"):
self.showsTable = self._build_shows_table()
# Row 1
yield Static(t("Show"))
yield self.showsTable
yield Static(" ")
yield self.differencesTable
# Row 2
yield Static(" ", classes="five")
# Row 3
yield Static(" ")
yield Button(t("Substitute"), id="pattern_button")
yield Static(" ", classes="three")
# Row 4
yield Static(t("Pattern"))
yield Input(type="text", id="pattern_input", classes="three")
yield Static(" ")
# Row 5
yield Static(" ", classes="five")
# Row 6
yield Static(t("Media Tags"))
yield self.mediaTagsTable
yield Static(" ")
# Row 7
yield Static(" ", classes="five")
# Row 8
yield Static(t("Streams"))
yield self.tracksTable
yield Static(" ")
yield build_screen_log_pane()
yield Footer()
def _update_grid_layout(self) -> None:
leftColumnWidth = max(
localized_column_width(t("Show"), self.GRID_COLUMN_LABEL_MIN),
localized_column_width(t("Pattern"), self.GRID_COLUMN_LABEL_MIN),
localized_column_width(t("Media Tags"), self.GRID_COLUMN_LABEL_MIN),
localized_column_width(t("Streams"), self.GRID_COLUMN_LABEL_MIN),
)
grid = self.query_one("#main_grid", Grid)
grid.styles.grid_columns = self._grid_columns_spec(leftColumnWidth)
def action_back(self):
go_back_or_exit(self)
def getDisplayedMediaDescriptor(self):
if self._currentPattern is not None and self._targetMediaDescriptor is not None:
return self._targetMediaDescriptor
return self._sourceMediaDescriptor
def getTrackEditSourceDescriptor(self):
selectedTrackDescriptor = self.getSelectedTrackDescriptor()
if (
selectedTrackDescriptor is None
or self._currentPattern is None
or self._targetMediaDescriptor is None
):
return selectedTrackDescriptor
for sourceTrackDescriptor in self._sourceMediaDescriptor.getTrackDescriptors():
if (
sourceTrackDescriptor.getSourceIndex()
== selectedTrackDescriptor.getSourceIndex()
and sourceTrackDescriptor.getType() == selectedTrackDescriptor.getType()
):
return sourceTrackDescriptor
return None
def _build_shows_table(self):
from textual.widgets import DataTable
showsTable = DataTable(classes="three")
idLabel = t("ID")
nameLabel = t("Name")
yearLabel = t("Year")
self._showColumnKeyId = add_auto_table_column(showsTable, idLabel)
self._showColumnKeyName = add_auto_table_column(showsTable, nameLabel)
self._showColumnKeyYear = add_auto_table_column(showsTable, yearLabel)
self._showColumnLabels = {
self._showColumnKeyId: idLabel,
self._showColumnKeyName: nameLabel,
self._showColumnKeyYear: yearLabel,
}
showsTable.cursor_type = "row"
return showsTable
def _get_selected_show_row_key(self):
try:
row_key, _ = self.showsTable.coordinate_to_cell_key(
self.showsTable.cursor_coordinate
)
return row_key
except CellDoesNotExist:
return None
def _move_show_cursor_to_row_key(self, row_key):
if row_key is None:
return
try:
row_index = int(self.showsTable.get_row_index(row_key))
except Exception:
return
self.showsTable.move_cursor(row=row_index)
def _sort_key_for_show_column(self, column_key):
if column_key in (self._showColumnKeyId, self._showColumnKeyYear):
return lambda value: int(value) if str(value).strip().isdigit() else -1
if column_key == self._showColumnKeyName:
return lambda value: str(value).casefold()
return None
def _update_show_header_labels(self):
if not hasattr(self, "showsTable"):
return
arrow_up = "▲"
arrow_down = "▼"
for column_key, base_label in self._showColumnLabels.items():
column = self.showsTable.columns.get(column_key)
if column is None:
continue
label_text = base_label
if column_key == self._showSortColumnKey:
label_text = (
f"{base_label} "
f"{arrow_down if self._showSortReverse else arrow_up}"
)
update_table_column_label(self.showsTable, column_key, Text(label_text))
def _apply_show_sort(self, *, preserve_row_key=None):
if self._showSortColumnKey is None:
self._update_show_header_labels()
return
self.showsTable.sort(
self._showSortColumnKey,
key=self._sort_key_for_show_column(self._showSortColumnKey),
reverse=self._showSortReverse,
)
self._move_show_cursor_to_row_key(preserve_row_key)
self._update_show_header_labels()
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
self._update_grid_layout()
if self._currentPattern is None:
self._add_show_row(None)
for show in self._sc.getAllShows():
self._add_show_row(show.getDescriptor(self.context))
self._showSortColumnKey = self._showColumnKeyName
self._apply_show_sort()
if self._currentPattern is not None:
showIdentifier = self._currentPattern.getShowId()
showRowIndex = self.getRowIndexFromShowId(showIdentifier)
if showRowIndex is not None:
self.showsTable.move_cursor(row=showRowIndex)
self.query_one("#pattern_input", Input).value = self._currentPattern.getPattern()
else:
self.query_one("#pattern_input", Input).value = self._mediaFilename
self.highlightPattern(True)
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()
def on_button_pressed(self, event: Button.Pressed) -> None:
if event.button.id == "pattern_button":
pattern = self.query_one("#pattern_input", Input).value
patternMatch = re.search(FileProperties.SE_INDICATOR_PATTERN, pattern)
if patternMatch:
self.query_one("#pattern_input", Input).value = pattern.replace(
patternMatch.group(1),
FileProperties.SE_INDICATOR_PATTERN,
)
if event.button.id == "select_default_button":
if self.setSelectedTrackDefault():
self.updateTracks()
self.updateDifferences()
if event.button.id == "select_forced_button":
if self.setSelectedTrackForced():
self.updateTracks()
self.updateDifferences()
def on_data_table_header_selected(self, event) -> None:
if event.data_table is not self.showsTable:
return
selected_row_key = self._get_selected_show_row_key()
if self._showSortColumnKey == event.column_key:
self._showSortReverse = not self._showSortReverse
else:
self._showSortColumnKey = event.column_key
self._showSortReverse = False
self._apply_show_sort(preserve_row_key=selected_row_key)
def removeShow(self, showId: int = -1):
for row_key, show_descriptor in list(self._showRowData.items()):
if (
(showId == -1 and show_descriptor is None)
or (
show_descriptor is not None
and show_descriptor.getId() == showId
)
):
self.showsTable.remove_row(row_key)
self._showRowData.pop(row_key, None)
return
def getRowIndexFromShowId(self, showId: int = -1) -> int | None:
for row_key, show_descriptor in self._showRowData.items():
if (
(showId == -1 and show_descriptor is None)
or (
show_descriptor is not None
and show_descriptor.getId() == showId
)
):
return int(self.showsTable.get_row_index(row_key))
return None
def _add_show_row(self, show_descriptor: ShowDescriptor | None):
if show_descriptor is None:
row_key = self.showsTable.add_row(" ", t("<New show>"), " ")
else:
row_key = self.showsTable.add_row(
str(show_descriptor.getId()),
str(show_descriptor.getName()),
str(show_descriptor.getYear()),
)
self._showRowData[row_key] = show_descriptor
return row_key
def highlightPattern(self, state: bool):
patternInput = self.query_one("#pattern_input", Input)
patternInput.styles.background = "red" if state else None
def getSelectedShowDescriptor(self) -> ShowDescriptor | None:
try:
row_key, _ = self.showsTable.coordinate_to_cell_key(
self.showsTable.cursor_coordinate
)
if row_key is not None:
return self._showRowData.get(row_key)
except (CellDoesNotExist, AttributeError):
return None
return None
def getPatternObjFromInput(self):
patternObj = {}
try:
patternObj["show_id"] = self.getSelectedShowDescriptor().getId()
patternObj["pattern"] = str(self.query_one("#pattern_input", Input).value)
except Exception:
return {}
return patternObj
def handle_new_pattern(self, showDescriptor: ShowDescriptor):
if type(showDescriptor) is not ShowDescriptor:
raise TypeError(
"InspectDetailsScreen.handle_new_pattern(): Argument 'showDescriptor' must be of type ShowDescriptor"
)
self.removeShow()
showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
if showRowIndex is None:
row_key = self._add_show_row(showDescriptor)
self._apply_show_sort(preserve_row_key=row_key)
showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
if showRowIndex is not None:
self.showsTable.move_cursor(row=showRowIndex)
patternObj = self.getPatternObjFromInput()
if patternObj:
mediaTags = {}
for tagKey, tagValue in self._sourceMediaDescriptor.getTags().items():
if (
tagKey not in self._ignoreGlobalKeys
and tagKey not in self._removeGlobalKeys
):
mediaTags[tagKey] = tagValue
patternId = self._pc.savePatternSchema(
patternObj,
trackDescriptors=self._sourceMediaDescriptor.getTrackDescriptors(),
mediaTags=mediaTags,
)
if patternId:
self.reloadProperties(reset_draft=True)
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()
self.highlightPattern(False)
def action_new_pattern(self):
selectedShowDescriptor = self.getSelectedShowDescriptor()
if selectedShowDescriptor is None:
self.app.push_screen(ShowDetailsScreen(), self.handle_new_pattern)
else:
self.handle_new_pattern(selectedShowDescriptor)
def action_update_pattern(self):
if self._currentPattern is not None:
patternObj = self.getPatternObjFromInput()
if (
patternObj
and self._currentPattern.getPattern() != patternObj["pattern"]
):
updated = self._pc.updatePattern(
self._currentPattern.getId(),
patternObj,
)
if updated:
self.reloadProperties(reset_draft=True)
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()
return updated
tagDifferences = self._mediaChangeSetObj.get(MediaDescriptorChangeSet.TAGS_KEY, {})
for addedTagKey in tagDifferences.get(DIFF_ADDED_KEY, {}).keys():
self._tac.deleteMediaTagByKey(self._currentPattern.getId(), addedTagKey)
for removedTagKey in tagDifferences.get(DIFF_REMOVED_KEY, {}).keys():
currentTags = self._sourceMediaDescriptor.getTags()
self._tac.updateMediaTag(
self._currentPattern.getId(),
removedTagKey,
currentTags[removedTagKey],
)
for changedTagKey in tagDifferences.get(DIFF_CHANGED_KEY, {}).keys():
currentTags = self._sourceMediaDescriptor.getTags()
self._tac.updateMediaTag(
self._currentPattern.getId(),
changedTagKey,
currentTags[changedTagKey],
)
trackDifferences = self._mediaChangeSetObj.get(MediaDescriptorChangeSet.TRACKS_KEY, {})
for trackDescriptor in trackDifferences.get(DIFF_ADDED_KEY, {}).values():
self._tc.addTrack(trackDescriptor, patternId=self._currentPattern.getId())
for trackDescriptor in trackDifferences.get(DIFF_REMOVED_KEY, {}).values():
self._tc.deleteTrack(trackDescriptor.getId())
for trackIndex, trackDiff in trackDifferences.get(DIFF_CHANGED_KEY, {}).items():
targetTracks = [
track
for track in self._targetMediaDescriptor.getTrackDescriptors()
if track.getIndex() == trackIndex
]
targetTrackId = targetTracks[0].getId() if targetTracks else None
targetTrackIndex = targetTracks[0].getIndex() if targetTracks else None
tagsDiff = trackDiff.get(TrackDescriptor.TAGS_KEY, {})
for tagKey, tagValue in tagsDiff.get(DIFF_ADDED_KEY, {}).items():
self._tac.updateTrackTag(targetTrackId, tagKey, tagValue)
for tagKey in tagsDiff.get(DIFF_REMOVED_KEY, {}).keys():
self._tac.deleteTrackTagByKey(targetTrackId, tagKey)
for tagKey, tagValue in tagsDiff.get(DIFF_CHANGED_KEY, {}).items():
self._tac.updateTrackTag(targetTrackId, tagKey, tagValue)
dispositionDiff = trackDiff.get(TrackDescriptor.DISPOSITION_SET_KEY, {})
for changedDisposition in dispositionDiff.get(DIFF_ADDED_KEY, set()):
if targetTrackIndex is not None:
self._tc.setDispositionState(
self._currentPattern.getId(),
targetTrackIndex,
changedDisposition,
True,
)
for changedDisposition in dispositionDiff.get(DIFF_REMOVED_KEY, set()):
if targetTrackIndex is not None:
self._tc.setDispositionState(
self._currentPattern.getId(),
targetTrackIndex,
changedDisposition,
False,
)
self.reloadProperties(reset_draft=True)
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()
def action_edit_pattern(self):
patternObj = self.getPatternObjFromInput()
if patternObj.get("pattern"):
selectedPatternId = self._pc.findPattern(patternObj)
if selectedPatternId is None:
raise click.ClickException(
"InspectDetailsScreen.action_edit_pattern(): Pattern to edit has no id"
)
self.app.push_screen(
PatternDetailsScreen(
patternId=selectedPatternId,
showId=self.getSelectedShowDescriptor().getId(),
),
self.handle_edit_pattern,
)
def handle_edit_pattern(self, screenResult):
self.reloadProperties(reset_draft=True)
if self._currentPattern is not None:
self.query_one("#pattern_input", Input).value = self._currentPattern.getPattern()
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()

View File

@@ -1,6 +1,8 @@
from enum import Enum
import difflib
from .i18n import translate_iso_language
class IsoLanguage(Enum):
@@ -196,11 +198,15 @@ class IsoLanguage(Enum):
@staticmethod
def find(label : str):
closestMatches = difflib.get_close_matches(label, [l.value["name"] for l in IsoLanguage], n=1)
candidate_map = {}
for language in IsoLanguage:
candidate_map[language.value["name"]] = language
candidate_map[translate_iso_language(language.name, language.value["name"])] = language
closestMatches = difflib.get_close_matches(label, list(candidate_map.keys()), n=1)
if closestMatches:
foundLangs = [l for l in IsoLanguage if l.value["name"] == closestMatches[0]]
return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
return candidate_map.get(closestMatches[0], IsoLanguage.UNDEFINED)
else:
return IsoLanguage.UNDEFINED
@@ -211,7 +217,7 @@ class IsoLanguage(Enum):
def label(self):
return str(self.value["name"])
return str(translate_iso_language(self.name, self.value["name"]))
def twoLetter(self):
return str(self.value["iso639_1"])
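The `find` rewrite above builds a label-to-member map covering both the English name and its localized translation before fuzzy matching, so a close match on either form resolves to the same enum member. A minimal standalone version of that idea, using a plain dict in place of the `IsoLanguage` enum (the candidate labels are illustrative):

```python
import difflib

# Illustrative candidate map: display label (any language) -> canonical code.
CANDIDATES = {
    "German": "de",
    "Deutsch": "de",
    "French": "fr",
    "Français": "fr",
}

def find_language(label: str, default: str = "und") -> str:
    """Fuzzy-match a user-supplied label against all known display names."""
    matches = difflib.get_close_matches(label, list(CANDIDATES), n=1)
    return CANDIDATES[matches[0]] if matches else default
```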

View File

@@ -5,6 +5,7 @@ import os
FFX_LOGGER_NAME = "FFX"
CONSOLE_HANDLER_NAME = "ffx-console"
FILE_HANDLER_NAME = "ffx-file"
MUTED_CONSOLE_LEVEL = logging.CRITICAL + 1
def get_ffx_logger(name: str = FFX_LOGGER_NAME) -> logging.Logger:
@@ -66,3 +67,31 @@ def configure_ffx_logger(
)
return logger
def set_ffx_console_logging_enabled(
logger: logging.Logger | None,
*,
enabled: bool,
):
if logger is None:
return None
console_handler = next(
(handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME),
None,
)
if console_handler is None:
return None
if enabled:
saved_level = getattr(console_handler, "_ffx_saved_level", None)
if saved_level is not None:
console_handler.setLevel(saved_level)
delattr(console_handler, "_ffx_saved_level")
return console_handler
if not hasattr(console_handler, "_ffx_saved_level"):
console_handler._ffx_saved_level = console_handler.level
console_handler.setLevel(MUTED_CONSOLE_LEVEL)
return console_handler
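The mute logic above parks the handler's level in a private attribute and raises the level past `CRITICAL` so nothing is emitted. Reduced to its core (names here are illustrative, not the ffx API):

```python
import logging

MUTED = logging.CRITICAL + 1  # above every standard level, so all records are dropped

def set_console_enabled(handler: logging.Handler, enabled: bool) -> None:
    if enabled:
        saved = getattr(handler, "_saved_level", None)
        if saved is not None:
            handler.setLevel(saved)
            delattr(handler, "_saved_level")
    else:
        # Save the level only on the first mute, so a repeated mute
        # does not overwrite the saved value with the sentinel.
        if not hasattr(handler, "_saved_level"):
            handler._saved_level = handler.level
        handler.setLevel(MUTED)

h = logging.StreamHandler()
h.setLevel(logging.INFO)
set_console_enabled(h, False)
set_console_enabled(h, False)  # idempotent
set_console_enabled(h, True)
print(h.level == logging.INFO)  # True: original level restored
```

The `hasattr` guard is the important part: without it, muting twice in a row would restore to the muted sentinel instead of the real level.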

View File

@@ -2,6 +2,7 @@ import os, re, click
from typing import List, Self
from ffx.attachment_format import AttachmentFormat
from ffx.track_type import TrackType
from ffx.iso_language import IsoLanguage
@@ -421,11 +422,11 @@ class MediaDescriptor:
if sourceMediaDescriptor:
fontDescriptors = [ftd for ftd in sourceMediaDescriptor.getAttachmentTracks()
if ftd.getCodec() == TrackCodec.TTF]
if ftd.getAttachmentFormat() == AttachmentFormat.TTF]
else:
fontDescriptors = [ftd for ftd in self.__trackDescriptors
if ftd.getType() == TrackType.ATTACHMENT
and ftd.getCodec() == TrackCodec.TTF]
and ftd.getAttachmentFormat() == AttachmentFormat.TTF]
for ad in sorted(fontDescriptors, key=lambda d: d.getIndex()):
inputMappingTokens += ["-map", f"0:{ad.getIndex()}"]
@@ -561,3 +562,19 @@ class MediaDescriptor:
yield (f"{td.getIndex()}:{td.getType().indicator()}:{td.getSubIndex()} "
+ '|'.join([d.indicator() for d in td.getDispositionSet()])
+ ' ' + ' '.join([str(k)+'='+str(v) for k,v in td.getTags().items()]))
def clone(self, context: dict | None = None):
kwargs = {
MediaDescriptor.TAGS_KEY: dict(self.__mediaTags),
MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY: [
trackDescriptor.clone(context=context if context is not None else self.__context)
for trackDescriptor in self.__trackDescriptors
],
}
if context is not None:
kwargs[MediaDescriptor.CONTEXT_KEY] = context
elif self.__context:
kwargs[MediaDescriptor.CONTEXT_KEY] = self.__context
return MediaDescriptor(**kwargs)
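The new `clone` copies mutable state and lets the caller override the shared context. A minimal sketch of that pattern (the class shape is illustrative, not `MediaDescriptor` itself):

```python
class Descriptor:
    def __init__(self, tags=None, context=None):
        self.tags = dict(tags or {})
        self.context = context

    def clone(self, context=None):
        # Copy the mutable tag dict; prefer the caller's context,
        # otherwise keep sharing our own.
        return Descriptor(
            tags=dict(self.tags),
            context=context if context is not None else self.context,
        )

src = Descriptor(tags={"title": "Pilot"}, context={"config": {}})
copy = src.clone()
copy.tags["title"] = "Changed"
print(src.tags["title"])  # "Pilot": the clone holds its own tag dict
```

Copying the tags while sharing the context mirrors the diff above: descriptors are independent, but all clones still see the one configuration object unless a new context is passed in.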

View File

@@ -8,6 +8,7 @@ from ffx.helper import dictDiff, setDiff, DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF
from ffx.track_codec import TrackCodec
from ffx.track_disposition import TrackDisposition
from ffx.track_type import TrackType
class MediaDescriptorChangeSet():
@@ -29,13 +30,27 @@ class MediaDescriptorChangeSet():
self.__configurationData = self.__context['config'].getData()
metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}
applyCleanup = bool(self.__context.get('apply_metadata_cleanup', True))
self.__applyMetadataNormalization = bool(
self.__context.get("apply_metadata_normalization", True)
)
self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
self.__removeGlobalKeys = (
metadataConfiguration['remove']
if applyCleanup and 'remove' in metadataConfiguration.keys()
else []
)
self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
if 'streams' in metadataConfiguration.keys()
and 'remove' in metadataConfiguration['streams'].keys() else [])
self.__removeTrackKeys = (
metadataConfiguration['streams']['remove']
if (
applyCleanup
and 'streams' in metadataConfiguration.keys()
and 'remove' in metadataConfiguration['streams'].keys()
)
else []
)
self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
if 'streams' in metadataConfiguration.keys()
and 'ignore' in metadataConfiguration['streams'].keys() else [])
@@ -119,7 +134,11 @@ class MediaDescriptorChangeSet():
sourceTrackTags = sourceTrackDescriptor.getTags() if sourceTrackDescriptor is not None else {}
targetTrackTags = (
self.normalizeTrackTags(targetTrackDescriptor.getTags())
self.normalizeTrackTags(
targetTrackDescriptor.getTags(),
trackDescriptor=targetTrackDescriptor,
fallbackTrackTags=sourceTrackTags,
)
if targetTrackDescriptor is not None
else {}
)
@@ -148,7 +167,7 @@ class MediaDescriptorChangeSet():
return trackCompareResult
def normalizeTrackTagValue(self, tagKey, tagValue):
if tagKey != "language":
if not self.__applyMetadataNormalization or tagKey != "language":
return tagValue
if isinstance(tagValue, IsoLanguage):
@@ -160,12 +179,40 @@ class MediaDescriptorChangeSet():
return tagValue
def normalizeTrackTags(self, trackTags: dict):
return {
def resolveTrackLanguage(self, tagValue):
if isinstance(tagValue, IsoLanguage):
return tagValue
trackLanguage = IsoLanguage.findThreeLetter(str(tagValue))
if trackLanguage != IsoLanguage.UNDEFINED:
return trackLanguage
return None
def normalizeTrackTags(
self,
trackTags: dict,
trackDescriptor: TrackDescriptor | None = None,
fallbackTrackTags: dict | None = None,
):
normalizedTrackTags = {
tagKey: self.normalizeTrackTagValue(tagKey, tagValue)
for tagKey, tagValue in trackTags.items()
}
if (
self.__applyMetadataNormalization
and trackDescriptor is not None
and trackDescriptor.getType() in (TrackType.VIDEO, TrackType.AUDIO, TrackType.SUBTITLE)
):
trackTitle = str(normalizedTrackTags.get("title", "")).strip()
fallbackTitle = str((fallbackTrackTags or {}).get("title", "")).strip()
trackLanguage = self.resolveTrackLanguage(normalizedTrackTags.get("language"))
if not trackTitle and not fallbackTitle and trackLanguage is not None:
normalizedTrackTags["title"] = trackLanguage.label()
return normalizedTrackTags
def generateDispositionTokens(self):
"""
@@ -213,6 +260,8 @@ class MediaDescriptorChangeSet():
# else:
# dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '0']
for ttd in self.__targetTrackDescriptors:
if ttd.getType() == TrackType.ATTACHMENT:
continue
targetDispositions = ttd.getDispositionSet()
streamIndicator = ttd.getType().indicator()
@@ -267,7 +316,10 @@ class MediaDescriptorChangeSet():
addedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY]
trackDescriptor: TrackDescriptor
for trackDescriptor in addedTracks.values():
for tagKey, tagValue in self.normalizeTrackTags(trackDescriptor.getTags()).items():
for tagKey, tagValue in self.normalizeTrackTags(
trackDescriptor.getTags(),
trackDescriptor=trackDescriptor,
).items():
if not tagKey in self.__removeTrackKeys:
metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
+ f":{trackDescriptor.getSubIndex()}",
@@ -291,7 +343,11 @@ class MediaDescriptorChangeSet():
trackDescriptor = self.__targetTrackDescriptorsByIndex[trackIndex]
for tagKey, tagValue in self.normalizeTrackTags(outputTrackTags).items():
for tagKey, tagValue in self.normalizeTrackTags(
outputTrackTags,
trackDescriptor=trackDescriptor,
fallbackTrackTags=trackDescriptor.getTags(),
).items():
metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
+ f":{trackDescriptor.getSubIndex()}",
f"{tagKey}={tagValue}"]
@@ -309,7 +365,11 @@ class MediaDescriptorChangeSet():
}
| unchangedTrackTags
)
for tagKey, tagValue in self.normalizeTrackTags(preservedTrackTags).items():
for tagKey, tagValue in self.normalizeTrackTags(
preservedTrackTags,
trackDescriptor=trackDescriptor,
fallbackTrackTags=trackDescriptor.getTags(),
).items():
metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
+ f":{trackDescriptor.getSubIndex()}",
f"{tagKey}={tagValue}"]
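The tokens assembled above follow ffmpeg's per-stream metadata addressing, `-metadata:s:<type>:<index> key=value`. A rough sketch of the assembly, with the track tuples as illustrative stand-ins for the descriptor objects:

```python
def metadata_tokens(tracks, remove_keys=()):
    """tracks: iterable of (type_indicator, sub_index, tags) tuples,
    e.g. ("a", 0, {"language": "eng"}) for the first audio stream."""
    tokens = []
    for indicator, sub_index, tags in tracks:
        for key, value in tags.items():
            if key in remove_keys:
                continue  # configured cleanup keys are never re-emitted
            tokens += [f"-metadata:s:{indicator}:{sub_index}", f"{key}={value}"]
    return tokens

print(metadata_tokens([("a", 0, {"language": "eng"})]))
# ['-metadata:s:a:0', 'language=eng']
```

Keeping flag and value as two separate list items matters when the list is later passed to a subprocess call, where each token becomes one argv entry.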

View File

@@ -1,748 +1 @@
import os, click, re
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input, DataTable
from textual.containers import Grid
from ffx.audio_layout import AudioLayout
from .show_details_screen import ShowDetailsScreen
from .pattern_details_screen import PatternDetailsScreen
from .screen_support import build_screen_bootstrap, build_screen_controllers
from ffx.track_type import TrackType
from ffx.track_codec import TrackCodec
from ffx.model.track import Track
from ffx.track_disposition import TrackDisposition
from ffx.track_descriptor import TrackDescriptor
from ffx.show_descriptor import ShowDescriptor
from textual.widgets._data_table import CellDoesNotExist
from ffx.media_descriptor import MediaDescriptor
from ffx.file_properties import FileProperties
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.helper import formatRichColor, DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY, DIFF_UNCHANGED_KEY
# Screen[dict[int, str, int]]
class MediaDetailsScreen(Screen):
CSS = """
Grid {
grid-size: 5 8;
grid-rows: 8 2 2 2 2 8 2 2 8;
grid-columns: 15 25 90 10 105;
height: 100%;
width: 100%;
padding: 1;
}
DataTable .datatable--cursor {
background: darkorange;
color: black;
}
DataTable .datatable--header {
background: steelblue;
color: white;
}
Input {
border: none;
}
Button {
border: none;
}
DataTable {
min-height: 40;
}
#toplabel {
height: 1;
}
.two {
column-span: 2;
}
.three {
column-span: 3;
}
.four {
column-span: 4;
}
.five {
column-span: 5;
}
.triple {
row-span: 3;
}
.box {
height: 100%;
border: solid green;
}
.purple {
tint: purple 40%;
}
.yellow {
tint: yellow 40%;
}
#differences-table {
row-span: 8;
/* tint: magenta 40%; */
}
/* #pattern_input {
tint: red 40%;
}*/
"""
TRACKS_TABLE_INDEX_COLUMN_LABEL = "Index"
TRACKS_TABLE_TYPE_COLUMN_LABEL = "Type"
TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL = "SubIndex"
TRACKS_TABLE_CODEC_COLUMN_LABEL = "Codec"
TRACKS_TABLE_LAYOUT_COLUMN_LABEL = "Layout"
TRACKS_TABLE_LANGUAGE_COLUMN_LABEL = "Language"
TRACKS_TABLE_TITLE_COLUMN_LABEL = "Title"
TRACKS_TABLE_DEFAULT_COLUMN_LABEL = "Default"
TRACKS_TABLE_FORCED_COLUMN_LABEL = "Forced"
DIFFERENCES_TABLE_DIFFERENCES_COLUMN_LABEL = 'Differences (file->db/output)'
BINDINGS = [
("n", "new_pattern", "New Pattern"),
("u", "update_pattern", "Update Pattern"),
("e", "edit_pattern", "Edit Pattern"),
]
def __init__(self):
super().__init__()
bootstrap = build_screen_bootstrap(self.app.getContext())
self.context = bootstrap.context
self.__removeGlobalKeys = bootstrap.remove_global_keys
self.__ignoreGlobalKeys = bootstrap.ignore_global_keys
controllers = build_screen_controllers(
self.context,
pattern=True,
show=True,
track=True,
tag=True,
)
self.__pc = controllers['pattern']
self.__sc = controllers['show']
self.__tc = controllers['track']
self.__tac = controllers['tag']
if not 'command' in self.context.keys() or self.context['command'] != 'inspect':
raise click.ClickException(f"MediaDetailsScreen.__init__(): Can only perform command 'inspect'")
if not 'arguments' in self.context.keys() or not 'filename' in self.context['arguments'].keys() or not self.context['arguments']['filename']:
raise click.ClickException(f"MediaDetailsScreen.__init__(): Argument 'filename' is required to be provided for command 'inspect'")
self.__mediaFilename = self.context['arguments']['filename']
if not os.path.isfile(self.__mediaFilename):
raise click.ClickException(f"MediaDetailsScreen.__init__(): Media file {self.__mediaFilename} does not exist")
self.loadProperties()
def removeShow(self, showId : int = -1):
"""Remove show entry from DataTable.
Removes the <New show> entry if showId is not set"""
for rowKey, row in self.showsTable.rows.items(): # dict[RowKey, Row]
rowData = self.showsTable.get_row(rowKey)
try:
if (showId == -1 and rowData[0] == ' '
or showId == int(rowData[0])):
self.showsTable.remove_row(rowKey)
return
except (ValueError, IndexError):
continue
def getRowIndexFromShowId(self, showId : int = -1) -> int | None:
"""Find the index of the row where the value in the specified column matches the target_value."""
for rowKey, row in self.showsTable.rows.items(): # dict[RowKey, Row]
rowData = self.showsTable.get_row(rowKey)
try:
if ((showId == -1 and rowData[0] == ' ')
or showId == int(rowData[0])):
return int(self.showsTable.get_row_index(rowKey))
except (ValueError, IndexError):
continue
return None
def loadProperties(self):
self.__mediaFileProperties = FileProperties(self.context, self.__mediaFilename)
self.__sourceMediaDescriptor = self.__mediaFileProperties.getMediaDescriptor()
#HINT: This is None if the filename did not match anything in the database
self.__currentPattern = self.__mediaFileProperties.getPattern()
# no tags present
self.__targetMediaDescriptor = self.__currentPattern.getMediaDescriptor(self.context) if self.__currentPattern is not None else None
# Enumerating differences between media descriptors
# from file (=current) vs from stored in database (=target)
try:
mdcs = MediaDescriptorChangeSet(self.context,
self.__targetMediaDescriptor,
self.__sourceMediaDescriptor)
self.__mediaChangeSetObj = mdcs.getChangeSetObj()
except ValueError:
self.__mediaChangeSetObj = {}
def updateDifferences(self):
self.loadProperties()
self.differencesTable.clear()
if MediaDescriptorChangeSet.TAGS_KEY in self.__mediaChangeSetObj.keys():
if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY].items():
if tagKey not in self.__ignoreGlobalKeys:
row = (f"add media tag: key='{tagKey}' value='{tagValue}'",)
self.differencesTable.add_row(*map(str, row))
if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY].items():
if tagKey not in self.__ignoreGlobalKeys and tagKey not in self.__removeGlobalKeys:
row = (f"remove media tag: key='{tagKey}' value='{tagValue}'",)
self.differencesTable.add_row(*map(str, row))
if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY].items():
if tagKey not in self.__ignoreGlobalKeys:
row = (f"change media tag: key='{tagKey}' value='{tagValue}'",)
self.differencesTable.add_row(*map(str, row))
if MediaDescriptorChangeSet.TRACKS_KEY in self.__mediaChangeSetObj.keys():
if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
trackDescriptor: TrackDescriptor
for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY].items():
row = (f"add {trackDescriptor.getType().label()} track: index={trackDescriptor.getIndex()} lang={trackDescriptor.getLanguage().threeLetter()}",)
self.differencesTable.add_row(*map(str, row))
if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_REMOVED_KEY].items():
row = (f"remove stream #{trackIndex}",)
self.differencesTable.add_row(*map(str, row))
if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
changedTracks: dict = self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY]
targetTrackDescriptors = self.__targetMediaDescriptor.getTrackDescriptors()
trackDiffObj: dict
for trackIndex, trackDiffObj in changedTracks.items():
ttd: TrackDescriptor = targetTrackDescriptors[trackIndex]
if MediaDescriptorChangeSet.TAGS_KEY in trackDiffObj.keys():
removedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY]
if DIFF_REMOVED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
for tagKey, tagValue in removedTags.items():
row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) remove key={tagKey} value={tagValue}",)
self.differencesTable.add_row(*map(str, row))
addedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY]
if DIFF_ADDED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
for tagKey, tagValue in addedTags.items():
row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) add key={tagKey} value={tagValue}",)
self.differencesTable.add_row(*map(str, row))
changedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY]
if DIFF_CHANGED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
for tagKey, tagValue in changedTags.items():
row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) change key={tagKey} value={tagValue}",)
self.differencesTable.add_row(*map(str, row))
if MediaDescriptorChangeSet.DISPOSITION_SET_KEY in trackDiffObj.keys():
addedDispositions = (trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY][DIFF_ADDED_KEY]
if DIFF_ADDED_KEY in trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY].keys() else set())
for ad in addedDispositions:
row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) add disposition={ad.label()}",)
self.differencesTable.add_row(*map(str, row))
removedDispositions = (trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY][DIFF_REMOVED_KEY]
if DIFF_REMOVED_KEY in trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY].keys() else set())
for rd in removedDispositions:
row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) remove disposition={rd.label()}",)
self.differencesTable.add_row(*map(str, row))
def on_mount(self):
if self.__currentPattern is None:
row = (' ', '<New show>', ' ') # Convert each element to a string before adding
self.showsTable.add_row(*map(str, row))
for show in self.__sc.getAllShows():
row = (int(show.id), show.name, show.year) # Convert each element to a string before adding
self.showsTable.add_row(*map(str, row))
for mediaTagKey, mediaTagValue in self.__sourceMediaDescriptor.getTags().items():
textColor = None
if mediaTagKey in self.__ignoreGlobalKeys:
textColor = 'blue'
if mediaTagKey in self.__removeGlobalKeys:
textColor = 'red'
row = (formatRichColor(mediaTagKey, textColor), formatRichColor(mediaTagValue, textColor)) # Convert each element to a string before adding
self.mediaTagsTable.add_row(*map(str, row))
self.updateTracks()
if self.__currentPattern is not None:
showIdentifier = self.__currentPattern.getShowId()
showRowIndex = self.getRowIndexFromShowId(showIdentifier)
if showRowIndex is not None:
self.showsTable.move_cursor(row=showRowIndex)
self.query_one("#pattern_input", Input).value = self.__currentPattern.getPattern()
self.updateDifferences()
else:
self.query_one("#pattern_input", Input).value = self.__mediaFilename
self.highlightPattern(True)
def highlightPattern(self, state : bool):
if state:
self.query_one("#pattern_input", Input).styles.background = 'red'
else:
self.query_one("#pattern_input", Input).styles.background = None
def updateTracks(self):
self.tracksTable.clear()
# trackDescriptorList = self.__sourceMediaDescriptor.getAllTrackDescriptors()
trackDescriptorList = self.__sourceMediaDescriptor.getTrackDescriptors()
typeCounter = {}
for td in trackDescriptorList:
trackType = td.getType()
if not trackType in typeCounter.keys():
typeCounter[trackType] = 0
dispoSet = td.getDispositionSet()
audioLayout = td.getAudioLayout()
row = (td.getIndex(),
trackType.label(),
typeCounter[trackType],
td.getCodec().label(),
audioLayout.label() if trackType == TrackType.AUDIO
and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
td.getLanguage().label(),
td.getTitle(),
'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
'Yes' if TrackDisposition.FORCED in dispoSet else 'No')
self.tracksTable.add_row(*map(str, row))
typeCounter[trackType] += 1
def compose(self):
# Create the DataTable widget
self.showsTable = DataTable(classes="two")
# Define the columns with headers
self.column_key_show_id = self.showsTable.add_column("ID", width=10)
self.column_key_show_name = self.showsTable.add_column("Name", width=80)
self.column_key_show_year = self.showsTable.add_column("Year", width=10)
self.showsTable.cursor_type = 'row'
self.mediaTagsTable = DataTable(classes="two")
# Define the columns with headers
self.column_key_track_tag_key = self.mediaTagsTable.add_column("Key", width=30)
self.column_key_track_tag_value = self.mediaTagsTable.add_column("Value", width=70)
self.mediaTagsTable.cursor_type = 'row'
self.tracksTable = DataTable(classes="two")
# Define the columns with headers
self.column_key_track_index = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_INDEX_COLUMN_LABEL, width=5)
self.column_key_track_type = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_TYPE_COLUMN_LABEL, width=10)
self.column_key_track_sub_index = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL, width=8)
self.column_key_track_codec = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_CODEC_COLUMN_LABEL, width=10)
self.column_key_track_layout = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_LAYOUT_COLUMN_LABEL, width=10)
self.column_key_track_language = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_LANGUAGE_COLUMN_LABEL, width=15)
self.column_key_track_title = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_TITLE_COLUMN_LABEL, width=48)
self.column_key_track_default = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_DEFAULT_COLUMN_LABEL, width=8)
self.column_key_track_forced = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_FORCED_COLUMN_LABEL, width=8)
self.tracksTable.cursor_type = 'row'
# Create the DataTable widget
self.differencesTable = DataTable(id='differences-table') # classes="triple"
# Define the columns with headers
self.column_key_differences = self.differencesTable.add_column(MediaDetailsScreen.DIFFERENCES_TABLE_DIFFERENCES_COLUMN_LABEL, width=100)
self.differencesTable.cursor_type = 'row'
yield Header()
with Grid():
# 1
yield Static("Show")
yield self.showsTable
yield Static(" ")
yield self.differencesTable
# 2
yield Static(" ", classes="four")
# 3
yield Static(" ")
yield Button("Substitute", id="pattern_button")
yield Static(" ", classes="two")
# 4
yield Static("Pattern")
yield Input(type="text", id='pattern_input', classes="two")
yield Static(" ")
# 5
yield Static(" ", classes="four")
# 6
yield Static("Media Tags")
yield self.mediaTagsTable
yield Static(" ")
# 7
yield Static(" ", classes="four")
# 8
yield Static(" ")
yield Button("Set Default", id="select_default_button")
yield Button("Set Forced", id="select_forced_button")
yield Static(" ")
# 9
yield Static("Streams")
yield self.tracksTable
yield Static(" ")
yield Footer()
def getPatternObjFromInput(self):
"""Returns show id and pattern as obj from corresponding inputs"""
patternObj = {}
try:
patternObj['show_id'] = self.getSelectedShowDescriptor().getId()
patternObj['pattern'] = str(self.query_one("#pattern_input", Input).value)
except Exception:
return {}
return patternObj
def on_button_pressed(self, event: Button.Pressed) -> None:
if event.button.id == "pattern_button":
pattern = self.query_one("#pattern_input", Input).value
patternMatch = re.search(FileProperties.SE_INDICATOR_PATTERN, pattern)
if patternMatch:
self.query_one("#pattern_input", Input).value = pattern.replace(patternMatch.group(1), FileProperties.SE_INDICATOR_PATTERN)
if event.button.id == "select_default_button":
selectedTrackDescriptor = self.getSelectedTrackDescriptor()
self.__sourceMediaDescriptor.setDefaultSubTrack(selectedTrackDescriptor.getType(), selectedTrackDescriptor.getSubIndex())
self.updateTracks()
if event.button.id == "select_forced_button":
selectedTrackDescriptor = self.getSelectedTrackDescriptor()
self.__sourceMediaDescriptor.setForcedSubTrack(selectedTrackDescriptor.getType(), selectedTrackDescriptor.getSubIndex())
self.updateTracks()
def getSelectedTrackDescriptor(self):
"""Returns a partial track descriptor"""
try:
# Fetch the currently selected row when 'Enter' is pressed
#selected_row_index = self.table.cursor_row
row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)
if row_key is not None:
selected_track_data = self.tracksTable.get_row(row_key)
kwargs = {}
kwargs[TrackDescriptor.CONTEXT_KEY] = self.context
kwargs[TrackDescriptor.INDEX_KEY] = int(selected_track_data[0])
kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.fromLabel(selected_track_data[1])
kwargs[TrackDescriptor.SUB_INDEX_KEY] = int(selected_track_data[2])
kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.fromLabel(selected_track_data[3])
kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.fromLabel(selected_track_data[4])
return TrackDescriptor(**kwargs)
else:
return None
except CellDoesNotExist:
return None
def getSelectedShowDescriptor(self) -> ShowDescriptor:
try:
row_key, col_key = self.showsTable.coordinate_to_cell_key(self.showsTable.cursor_coordinate)
if row_key is not None:
selected_row_data = self.showsTable.get_row(row_key)
try:
kwargs = {}
kwargs[ShowDescriptor.CONTEXT_KEY] = self.context
kwargs[ShowDescriptor.ID_KEY] = int(selected_row_data[0])
kwargs[ShowDescriptor.NAME_KEY] = str(selected_row_data[1])
kwargs[ShowDescriptor.YEAR_KEY] = int(selected_row_data[2])
return ShowDescriptor(**kwargs)
except ValueError:
return None
except CellDoesNotExist:
return None
def handle_new_pattern(self, showDescriptor: ShowDescriptor):
""""""
if type(showDescriptor) is not ShowDescriptor:
raise TypeError("MediaDetailsScreen.handle_new_pattern(): Argument 'showDescriptor' has to be of type ShowDescriptor")
self.removeShow()
showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
if showRowIndex is None:
show = (showDescriptor.getId(), showDescriptor.getName(), showDescriptor.getYear())
self.showsTable.add_row(*map(str, show))
showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
if showRowIndex is not None:
self.showsTable.move_cursor(row=showRowIndex)
patternObj = self.getPatternObjFromInput()
if patternObj:
mediaTags = {}
for tagKey, tagValue in self.__sourceMediaDescriptor.getTags().items():
# Filter tags that make no sense to preserve
if tagKey not in self.__ignoreGlobalKeys and not tagKey in self.__removeGlobalKeys:
mediaTags[tagKey] = tagValue
patternId = self.__pc.savePatternSchema(
patternObj,
trackDescriptors=self.__sourceMediaDescriptor.getTrackDescriptors(),
mediaTags=mediaTags,
)
if patternId:
self.highlightPattern(False)
def action_new_pattern(self):
"""Adding new patterns
If the corresponding show does not exists in DB it is added beforehand"""
selectedShowDescriptor = self.getSelectedShowDescriptor()
#HINT: Callback is invoked after this method has exited. As a workaround the callback is executed directly
# from here with a mock-up screen result containing the necessary part of keys to perform correctly.
if selectedShowDescriptor is None:
self.app.push_screen(ShowDetailsScreen(), self.handle_new_pattern)
else:
self.handle_new_pattern(selectedShowDescriptor)
def action_update_pattern(self):
"""Updating patterns
When updating the database the actions must reverse the difference (eq to diff db->file)"""
if self.__currentPattern is not None:
patternObj = self.getPatternObjFromInput()
if (patternObj
and self.__currentPattern.getPattern() != patternObj['pattern']):
return self.__pc.updatePattern(self.__currentPattern.getId(), patternObj)
self.loadProperties()
# __mediaChangeSetObj is file vs database
if MediaDescriptorChangeSet.TAGS_KEY in self.__mediaChangeSetObj.keys():
if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
for addedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY].keys():
# click.ClickException(f"delete media tag patternId={self.__currentPattern.getId()} addedTagKey={addedTagKey}")
self.__tac.deleteMediaTagByKey(self.__currentPattern.getId(), addedTagKey)
if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
for removedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY].keys():
currentTags = self.__sourceMediaDescriptor.getTags()
# click.ClickException(f"delete media tag patternId={self.__currentPattern.getId()} removedTagKey={removedTagKey} currentTags={currentTags[removedTagKey]}")
self.__tac.updateMediaTag(self.__currentPattern.getId(), removedTagKey, currentTags[removedTagKey])
if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
for changedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY].keys():
currentTags = self.__sourceMediaDescriptor.getTags()
# click.ClickException(f"delete media tag patternId={self.__currentPattern.getId()} changedTagKey={changedTagKey} currentTags={currentTags[changedTagKey]}")
self.__tac.updateMediaTag(self.__currentPattern.getId(), changedTagKey, currentTags[changedTagKey])
if MediaDescriptorChangeSet.TRACKS_KEY in self.__mediaChangeSetObj.keys():
if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY].items():
#targetTracks = [t for t in self.__targetMediaDescriptor.getAllTrackDescriptors() if t.getIndex() == addedTrackIndex]
# if targetTracks:
# self.__tc.deleteTrack(targetTracks[0].getId()) # id
# self.__tc.deleteTrack(targetTracks[0].getId())
self.__tc.addTrack(trackDescriptor, patternId = self.__currentPattern.getId())
if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
trackDescriptor: TrackDescriptor
for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_REMOVED_KEY].items():
# Add track via inspect/update
#self.__tc.addTrack(removedTrack, patternId = self.__currentPattern.getId())
self.__tc.deleteTrack(trackDescriptor.getId())
if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
# [vsTracks[tp].getIndex()] = trackDiff
for trackIndex, trackDiff in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY].items():
targetTracks = [t for t in self.__targetMediaDescriptor.getTrackDescriptors() if t.getIndex() == trackIndex]
targetTrackId = targetTracks[0].getId() if targetTracks else None
targetTrackIndex = targetTracks[0].getIndex() if targetTracks else None
changedCurrentTracks = [t for t in self.__sourceMediaDescriptor.getTrackDescriptors() if t.getIndex() == trackIndex]
# changedCurrentTrackId #HINT: Undefined as track descriptors do not come from file with track_id
if TrackDescriptor.TAGS_KEY in trackDiff.keys():
tagsDiff = trackDiff[TrackDescriptor.TAGS_KEY]
if DIFF_ADDED_KEY in tagsDiff.keys():
for tagKey, tagValue in tagsDiff[DIFF_ADDED_KEY].items():
# if targetTracks:
# self.__tac.deleteTrackTagByKey(targetTrackId, addedTrackTagKey)
self.__tac.updateTrackTag(targetTrackId, tagKey, tagValue)
if DIFF_REMOVED_KEY in tagsDiff.keys():
for tagKey, tagValue in tagsDiff[DIFF_REMOVED_KEY].items():
# if changedCurrentTracks:
# self.__tac.updateTrackTag(targetTrackId, removedTrackTagKey, changedCurrentTracks[0].getTags()[removedTrackTagKey])
self.__tac.deleteTrackTagByKey(targetTrackId, tagKey)
if DIFF_CHANGED_KEY in tagsDiff.keys():
for tagKey, tagValue in tagsDiff[DIFF_CHANGED_KEY].items():
# if changedCurrentTracks:
# self.__tac.updateTrackTag(targetTrackId, changedTrackTagKey, changedCurrentTracks[0].getTags()[changedTrackTagKey])
self.__tac.updateTrackTag(targetTrackId, tagKey, tagValue)
if TrackDescriptor.DISPOSITION_SET_KEY in trackDiff.keys():
changedTrackDispositionDiff = trackDiff[TrackDescriptor.DISPOSITION_SET_KEY]
if DIFF_ADDED_KEY in changedTrackDispositionDiff.keys():
for changedDisposition in changedTrackDispositionDiff[DIFF_ADDED_KEY]:
if targetTrackIndex is not None:
self.__tc.setDispositionState(self.__currentPattern.getId(), targetTrackIndex, changedDisposition, True)
if DIFF_REMOVED_KEY in changedTrackDispositionDiff.keys():
for changedDisposition in changedTrackDispositionDiff[DIFF_REMOVED_KEY]:
if targetTrackIndex is not None:
self.__tc.setDispositionState(self.__currentPattern.getId(), targetTrackIndex, changedDisposition, False)
self.updateDifferences()
def action_edit_pattern(self):
patternObj = self.getPatternObjFromInput()
if patternObj.get('pattern'):
selectedPatternId = self.__pc.findPattern(patternObj)
if selectedPatternId is None:
raise click.ClickException(f"MediaDetailsScreen.action_edit_pattern(): Pattern to edit has no id")
self.app.push_screen(PatternDetailsScreen(patternId = selectedPatternId, showId = self.getSelectedShowDescriptor().getId()), self.handle_edit_pattern) # <-
def handle_edit_pattern(self, screenResult):
self.query_one("#pattern_input", Input).value = screenResult['pattern']
self.updateDifferences()
from .inspect_details_screen import InspectDetailsScreen as MediaDetailsScreen
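`updateDifferences` above flattens the nested changeset dict into one display row per tag action. The core of that walk, with plain strings standing in for the `DIFF_*` constants:

```python
VERBS = {"added": "add", "removed": "remove", "changed": "change"}

def difference_rows(tag_changes, ignore=()):
    """tag_changes: {"added": {...}, "removed": {...}, "changed": {...}};
    keys listed in `ignore` are skipped, mirroring the configured ignore list."""
    rows = []
    for action, verb in VERBS.items():
        for key, value in tag_changes.get(action, {}).items():
            if key in ignore:
                continue
            rows.append(f"{verb} media tag: key='{key}' value='{value}'")
    return rows

rows = difference_rows(
    {"added": {"title": "Pilot"}, "removed": {"encoder": "x264"}},
    ignore=("encoder",),
)
print(rows)  # ["add media tag: key='title' value='Pilot'"]
```

Filtering at row-building time keeps the table in sync with the same ignore/remove configuration the changeset itself was built from.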


@@ -0,0 +1,531 @@
import os
from time import monotonic
from textual import events, work
from textual.containers import Grid
from textual.worker import Worker, WorkerState
from textual.widgets import Button, Footer, Header, Static
from ffx.metadata_editor import apply_metadata_edits
from ffx.track_descriptor import TrackDescriptor
from .i18n import t
from .confirm_screen import ConfirmScreen
from .media_workflow_screen_base import MediaWorkflowScreenBase
from .screen_support import build_screen_log_pane, localized_column_width
from .tag_delete_screen import TagDeleteScreen
from .tag_details_screen import TagDetailsScreen
from .track_details_screen import TrackDetailsScreen
from .helper import LogLevel
class MediaEditScreen(MediaWorkflowScreenBase):
GRID_COLUMN_LABEL_MIN = 12
GRID_COLUMN_2 = 20
GRID_COLUMN_3 = 25
GRID_COLUMN_4 = "4fr"
GRID_COLUMN_5 = 12
GRID_COLUMN_6 = "5fr"
CSS = f"""
Grid {{
grid-size: 6 10;
grid-rows: 2 2 2 8 2 2 8 2 8 2 2;
grid-columns: {GRID_COLUMN_LABEL_MIN} {GRID_COLUMN_2} {GRID_COLUMN_3} {GRID_COLUMN_4} {GRID_COLUMN_5} {GRID_COLUMN_6};
height: 100%;
width: 100%;
min-width: 120;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}}
DataTable .datatable--cursor {{
background: darkorange;
color: black;
}}
DataTable .datatable--header {{
background: steelblue;
color: white;
}}
Input {{
border: none;
}}
Button {{
border: none;
}}
DataTable {{
min-height: 24;
width: 100%;
}}
.two {{
column-span: 2;
}}
.three {{
column-span: 3;
}}
.four {{
column-span: 4;
}}
.five {{
column-span: 5;
}}
#differences-table {{
row-span: 10;
}}
#file_label {{
width: 100%;
}}
"""
@classmethod
def _grid_columns_spec(cls, label_column_width: int | None = None) -> str:
return " ".join(
[
str(
cls.GRID_COLUMN_LABEL_MIN
if label_column_width is None
else int(label_column_width)
),
str(cls.GRID_COLUMN_2),
str(cls.GRID_COLUMN_3),
str(cls.GRID_COLUMN_4),
str(cls.GRID_COLUMN_5),
str(cls.GRID_COLUMN_6),
]
)
COMMAND_NAME = "edit"
EDIT_MODE = True
DIFFERENCES_COLUMN_LABEL = "Planned Changes (file->edited output)"
BINDINGS = [
("escape", "back", t("Back")),
("q", "quit_screen", t("Quit")),
("a", "apply_changes", t("Apply")),
("r", "revert_changes", t("Revert")),
]
def compose(self):
self._build_media_tags_table()
self._build_tracks_table()
self._build_differences_table()
yield Header()
with Grid(id="main_grid"):
# Row 1
yield Static(t("File"))
yield Static(self._mediaFilename, id="file_label", classes="three", markup=False)
yield Static(" ")
yield self.differencesTable
# Row 2
yield Static(" ")
yield Button(t("Cleanup"), id="cleanup_toggle_button")
yield Button(t("Normalize"), id="normalize_toggle_button")
yield Static(" ", classes="two")
# Row 3
yield Static(t("Media Tags"))
yield Button(t("Add"), id="button_add_tag")
yield Button(t("Edit"), id="button_edit_tag")
yield Button(t("Delete"), id="button_delete_tag")
yield Static(" ")
# Row 4
yield Static(" ")
yield self.mediaTagsTable
yield Static(" ")
# Row 5
yield Static("", classes="five")
# Row 6
yield Static(t("Streams"))
yield Button(t("Edit"), id="button_edit_track")
yield Button(t("Set Default"), id="select_default_button")
yield Button(t("Set Forced"), id="select_forced_button")
yield Static(" ")
# Row 7
yield Static(" ")
yield self.tracksTable
yield Static(" ")
# Row 8
yield Static("", classes="five")
# Row 9
yield Static(" ")
yield Button(t("Apply"), id="apply_button")
yield Button(t("Revert"), id="revert_button")
yield Button(t("Quit"), id="quit_button")
yield Static(" ")
yield build_screen_log_pane()
yield Footer()
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
self._update_grid_layout()
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()
self.updateToggleButtons()
self._applyChangesWorker = None
def on_screen_resume(self, _event: events.ScreenResume) -> None:
if not hasattr(self, "tracksTable"):
return
self.refreshAfterDraftChange()
self.updateToggleButtons()
def _update_grid_layout(self) -> None:
leftColumnWidth = max(
localized_column_width(t("File"), self.GRID_COLUMN_LABEL_MIN),
localized_column_width(t("Media Tags"), self.GRID_COLUMN_LABEL_MIN),
localized_column_width(t("Streams"), self.GRID_COLUMN_LABEL_MIN),
)
grid = self.query_one("#main_grid", Grid)
grid.styles.grid_columns = self._grid_columns_spec(leftColumnWidth)
def action_back(self):
self.action_quit_screen()
def setMessage(self, message: str):
self._messageText = str(message)
if self._messageText:
self.notify(self._messageText)
def workerLoggingHandler(self,
message: str,
level: LogLevel = LogLevel.INFO) -> None:
if level == LogLevel.DEBUG:
self.context["logger"].debug(str(message))
elif level == LogLevel.INFO:
self.context["logger"].info(str(message))
elif level == LogLevel.WARNING:
self.context["logger"].warning(str(message))
elif level == LogLevel.ERROR:
self.context["logger"].error(str(message))
elif level == LogLevel.CRITICAL:
self.context["logger"].critical(str(message))
else:
raise ValueError(f"Undefined logging level (msg={message})")
def _report_apply_timings(self, applyResult: dict, reloadSeconds: float = 0.0) -> None:
timings = dict(applyResult.get("timings", {}))
ffmpegSeconds = float(timings.get("ffmpeg_seconds", 0.0))
replaceSeconds = float(timings.get("replace_seconds", 0.0))
writeSeconds = float(timings.get("write_seconds", ffmpegSeconds + replaceSeconds))
reloadSeconds = float(reloadSeconds)
totalSeconds = writeSeconds + reloadSeconds
timingSummary = (
f"ffx edit timings: ffmpeg={ffmpegSeconds:.2f}s "
+ f"replace={replaceSeconds:.2f}s "
+ f"reload={reloadSeconds:.2f}s "
+ f"total={totalSeconds:.2f}s"
)
self.context["logger"].info(timingSummary)
def updateToggleButtons(self):
self._set_toggle_button_state(
"#cleanup_toggle_button",
t("Cleanup"),
self._applyCleanup,
)
self._set_toggle_button_state(
"#normalize_toggle_button",
t("Normalize"),
self._applyNormalization,
)
def _set_toggle_button_state(self, selector: str, label: str, enabled: bool):
try:
button = self.query_one(selector, Button)
except Exception:
return
button.label = label
button.styles.color = "black" if enabled else "white"
button.styles.background = "darkorange" if enabled else "black"
def refreshAfterDraftChange(self):
self.updateMediaTags()
self.updateTracks()
self.updateDifferences()
def on_button_pressed(self, event: Button.Pressed) -> None:
if event.button.id == "select_default_button":
if self.setSelectedTrackDefault():
self.refreshAfterDraftChange()
if event.button.id == "select_forced_button":
if self.setSelectedTrackForced():
self.refreshAfterDraftChange()
if event.button.id == "button_add_tag":
self.app.push_screen(TagDetailsScreen(), self.handle_update_media_tag)
if event.button.id == "button_edit_tag":
selectedTag = self.getSelectedMediaTag()
if selectedTag is not None:
self.app.push_screen(
TagDetailsScreen(key=selectedTag[0], value=selectedTag[1]),
self.handle_update_media_tag,
)
if event.button.id == "button_delete_tag":
selectedTag = self.getSelectedMediaTag()
if selectedTag is not None:
self.app.push_screen(
TagDeleteScreen(key=selectedTag[0], value=selectedTag[1]),
self.handle_delete_media_tag,
)
if event.button.id == "button_edit_track":
self.action_edit_selected_track()
if event.button.id == "cleanup_toggle_button":
self.action_toggle_cleanup()
if event.button.id == "normalize_toggle_button":
self.action_toggle_normalization()
if event.button.id == "apply_button":
self.action_apply_changes()
if event.button.id == "revert_button":
self.action_revert_changes()
if event.button.id == "quit_button":
self.action_quit_screen()
def action_edit_selected_track(self):
selectedTrack = self.getSelectedTrackDescriptor()
if selectedTrack is None:
self.setMessage(t("Select a stream first."))
return
self.app.push_screen(
TrackDetailsScreen(
trackDescriptor=selectedTrack,
patternLabel=os.path.basename(self._mediaFilename),
siblingTrackDescriptors=self._sourceMediaDescriptor.getTrackDescriptors(),
metadata_only=True,
),
self.handle_edit_track,
)
def action_toggle_cleanup(self):
self.setApplyCleanup(not self._applyCleanup)
self.updateToggleButtons()
self.updateMediaTags()
self.updateDifferences()
self.setMessage(
t("Cleanup enabled.") if self._applyCleanup else t("Cleanup disabled.")
)
def action_toggle_normalization(self):
self.setApplyNormalization(not self._applyNormalization)
self.updateToggleButtons()
self.updateTracks()
self.updateDifferences()
self.setMessage(
t("Normalization enabled.")
if self._applyNormalization
else t("Normalization disabled.")
)
def handle_update_media_tag(self, tag):
if tag is None:
return
self._sourceMediaDescriptor.getTags()[str(tag[0])] = str(tag[1])
self.setMessage(t("Updated media tag {tag!r}.", tag=tag[0]))
self.refreshAfterDraftChange()
def handle_delete_media_tag(self, tag):
if tag is None:
return
self._sourceMediaDescriptor.getTags().pop(str(tag[0]), None)
self.setMessage(t("Deleted media tag {tag!r}.", tag=tag[0]))
self.refreshAfterDraftChange()
def handle_edit_track(self, trackDescriptor: TrackDescriptor):
if trackDescriptor is None:
return
nextSourceMediaDescriptor = self._sourceMediaDescriptor.clone(context=self.context)
updatedTracks = nextSourceMediaDescriptor.getTrackDescriptors()
replacementTrack = trackDescriptor.clone(context=self.context)
replaced = False
for trackIndex, currentTrack in enumerate(updatedTracks):
sameSourceTrack = (
currentTrack.getSourceIndex() == replacementTrack.getSourceIndex()
and currentTrack.getType() == replacementTrack.getType()
)
sameVisibleTrack = (
currentTrack.getIndex() == replacementTrack.getIndex()
and currentTrack.getSubIndex() == replacementTrack.getSubIndex()
)
if sameSourceTrack or sameVisibleTrack:
updatedTracks[trackIndex] = replacementTrack
replaced = True
break
if not replaced:
self.setMessage(t("Unable to update selected stream."))
return
self._sourceMediaDescriptor = nextSourceMediaDescriptor
self.setMessage(
t(
"Updated stream #{index} ({track_type}).",
index=replacementTrack.getIndex(),
track_type=t(replacementTrack.getType().label()),
)
)
self.refreshAfterDraftChange()
def action_apply_changes(self):
if not self.hasPendingChanges():
self.setMessage(t("No changes to apply."))
return
if self._applyChangesWorker is not None and self._applyChangesWorker.is_running:
self.setMessage(t("Apply already running."))
return
self.context["logger"].info(
t("Starting metadata apply for {filename}.", filename=self._mediaFilename)
)
self._applyChangesWorker = self.run_apply_changes_worker()
@work(
thread=True,
exclusive=True,
group="media-edit-apply",
exit_on_error=False,
)
def run_apply_changes_worker(self):
return apply_metadata_edits(
self.context,
self._mediaFilename,
self._baselineMediaDescriptor,
self._sourceMediaDescriptor,
loggingHandler = self.workerLoggingHandler,
)
def on_worker_state_changed(self, event: Worker.StateChanged) -> None:
if event.worker is not self._applyChangesWorker:
return
if event.state == WorkerState.ERROR:
error = event.worker.error
if error is not None:
self.context["logger"].error(
"Failed to apply metadata edits for %s",
self._mediaFilename,
exc_info=(type(error), error, error.__traceback__),
)
self.setMessage(t("Apply failed: {error}", error=error))
self._applyChangesWorker = None
return
if event.state != WorkerState.SUCCESS:
return
applyResult = event.worker.result or {}
if applyResult.get("dry_run", False):
self._report_apply_timings(applyResult, reloadSeconds=0.0)
self.context["logger"].info(
t(
"Dry-run prepared temporary output {target_path}.",
target_path=applyResult["target_path"],
),
)
self.setMessage(
t(
"Dry-run: would rewrite via temporary file {target_path}",
target_path=applyResult["target_path"],
)
)
self._applyChangesWorker = None
return
reloadStart = monotonic()
self.context["logger"].info(t("Reloading file after metadata write."))
self.reloadProperties(reset_draft=True)
self.refreshAfterDraftChange()
reloadSeconds = monotonic() - reloadStart
self._report_apply_timings(applyResult, reloadSeconds=reloadSeconds)
self.context["logger"].info(t("Changes applied and file reloaded."))
self.setMessage(t("Changes applied and file reloaded."))
self._applyChangesWorker = None
def action_revert_changes(self):
if not self.hasPendingChanges():
self.setMessage(t("No changes to revert."))
return
self.app.push_screen(
ConfirmScreen(
t("Discard pending metadata changes and reload the file state?"),
confirm_label=t("Discard"),
cancel_label=t("Keep Editing"),
),
self.handle_revert_confirmation,
)
def handle_revert_confirmation(self, confirmed):
if not confirmed:
self.setMessage(t("Keeping pending changes."))
return
self.reloadProperties(reset_draft=True)
self.refreshAfterDraftChange()
self.setMessage(t("Reverted pending changes."))
def action_quit_screen(self):
if self.hasPendingChanges():
self.app.push_screen(
ConfirmScreen(
t("Discard pending metadata changes and quit?"),
confirm_label=t("Discard"),
cancel_label=t("Stay"),
),
self.handle_quit_confirmation,
)
return
self.app.exit()
def handle_quit_confirmation(self, confirmed):
if confirmed:
self.app.exit()
else:
self.setMessage(t("Continuing edit session."))


@@ -0,0 +1,434 @@
import os
import click
from textual.screen import Screen
from textual.widgets import DataTable
from textual.widgets._data_table import CellDoesNotExist
from ffx.attachment_format import AttachmentFormat
from ffx.audio_layout import AudioLayout
from ffx.file_properties import FileProperties
from ffx.helper import DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY
from ffx.iso_language import IsoLanguage
from ffx.media_descriptor import MediaDescriptor
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.track_descriptor import TrackDescriptor
from ffx.track_disposition import TrackDisposition
from ffx.track_type import TrackType
from .i18n import t
from .screen_support import add_auto_table_column, build_screen_bootstrap, populate_tag_table
class MediaWorkflowScreenBase(Screen):
TRACKS_TABLE_INDEX_COLUMN_LABEL = "Index"
TRACKS_TABLE_TYPE_COLUMN_LABEL = "Type"
TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL = "SubIndex"
TRACKS_TABLE_CODEC_COLUMN_LABEL = "Codec"
TRACKS_TABLE_LAYOUT_COLUMN_LABEL = "Layout"
TRACKS_TABLE_LANGUAGE_COLUMN_LABEL = "Language"
TRACKS_TABLE_TITLE_COLUMN_LABEL = "Title"
TRACKS_TABLE_DEFAULT_COLUMN_LABEL = "Default"
TRACKS_TABLE_FORCED_COLUMN_LABEL = "Forced"
DIFFERENCES_COLUMN_LABEL = "Differences"
COMMAND_NAME = ""
EDIT_MODE = False
def __init__(self):
super().__init__()
bootstrap = build_screen_bootstrap(self.app.getContext())
self.context = bootstrap.context
self._applyCleanup = False
self._applyNormalization = bool(self.context.get("apply_metadata_normalization", True))
self._removeGlobalKeys = []
self._ignoreGlobalKeys = []
self._apply_bootstrap_settings(bootstrap)
command = self.context.get("command")
if command != self.COMMAND_NAME:
raise click.ClickException(
f"{type(self).__name__}.__init__(): Can only perform command '{self.COMMAND_NAME}'"
)
arguments = self.context.get("arguments", {})
self._mediaFilename = arguments.get("filename", "")
if not self._mediaFilename:
raise click.ClickException(
f"{type(self).__name__}.__init__(): Argument 'filename' is required"
)
if not os.path.isfile(self._mediaFilename):
raise click.ClickException(
f"{type(self).__name__}.__init__(): Media file {self._mediaFilename} does not exist"
)
self._baselineMediaDescriptor = None
self._sourceMediaDescriptor = None
self._targetMediaDescriptor = None
self._currentPattern = None
self._mediaChangeSetObj = {}
self._messageText = ""
self._trackRowData: dict[object, TrackDescriptor] = {}
self._sourceMediaTagRowData: dict[object, tuple[str, str]] = {}
self.reloadProperties(reset_draft=True)
def _apply_bootstrap_settings(self, bootstrap) -> None:
self._applyCleanup = bootstrap.apply_cleanup
self._removeGlobalKeys = bootstrap.remove_global_keys
self._ignoreGlobalKeys = bootstrap.ignore_global_keys
def refreshCleanupSettings(self) -> None:
self._apply_bootstrap_settings(build_screen_bootstrap(self.context))
def setApplyCleanup(self, enabled: bool) -> None:
self.context["apply_metadata_cleanup"] = bool(enabled)
self.refreshCleanupSettings()
def refreshNormalizationSettings(self) -> None:
self._applyNormalization = bool(
self.context.get("apply_metadata_normalization", True)
)
def setApplyNormalization(self, enabled: bool) -> None:
self.context["apply_metadata_normalization"] = bool(enabled)
self.refreshNormalizationSettings()
def _build_media_tags_table(self):
self.mediaTagsTable = DataTable(classes="three")
add_auto_table_column(self.mediaTagsTable, t("Key"))
add_auto_table_column(self.mediaTagsTable, t("Value"))
self.mediaTagsTable.cursor_type = "row"
def _build_tracks_table(self):
self.tracksTable = DataTable(classes="three")
self._configure_tracks_table_columns()
self.tracksTable.cursor_type = "row"
def _configure_tracks_table_columns(self):
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_INDEX_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_TYPE_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_CODEC_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_LAYOUT_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_LANGUAGE_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_TITLE_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_DEFAULT_COLUMN_LABEL))
add_auto_table_column(self.tracksTable, t(self.TRACKS_TABLE_FORCED_COLUMN_LABEL))
def _build_differences_table(self):
self.differencesTable = DataTable(id="differences-table")
add_auto_table_column(self.differencesTable, t(self.DIFFERENCES_COLUMN_LABEL))
self.differencesTable.cursor_type = "row"
def _track_codec_cell_value(self, trackDescriptor: TrackDescriptor) -> str:
if trackDescriptor.getType() == TrackType.ATTACHMENT:
attachmentFormat = trackDescriptor.getAttachmentFormat()
if attachmentFormat == AttachmentFormat.UNKNOWN:
return attachmentFormat.identifier()
return attachmentFormat.label()
return trackDescriptor.getFormatDescriptor().label()
def _track_language_cell_value(self, trackDescriptor: TrackDescriptor) -> str:
if trackDescriptor.getType() == TrackType.ATTACHMENT:
return " "
return trackDescriptor.getLanguage().label()
def _track_disposition_cell_value(
self,
trackDescriptor: TrackDescriptor,
disposition: TrackDisposition,
) -> str:
if trackDescriptor.getType() == TrackType.ATTACHMENT:
return " "
return (
t("Yes")
if disposition in trackDescriptor.getDispositionSet()
else t("No")
)
def reloadProperties(self, reset_draft: bool = True):
self._mediaFileProperties = FileProperties(self.context, self._mediaFilename)
probedMediaDescriptor = self._mediaFileProperties.getMediaDescriptor()
if self.EDIT_MODE:
self._baselineMediaDescriptor = probedMediaDescriptor
if reset_draft or self._sourceMediaDescriptor is None:
self._sourceMediaDescriptor = probedMediaDescriptor.clone(context=self.context)
self._targetMediaDescriptor = self._sourceMediaDescriptor
self._currentPattern = None
else:
self._baselineMediaDescriptor = probedMediaDescriptor
self._sourceMediaDescriptor = probedMediaDescriptor
self._currentPattern = self._mediaFileProperties.getPattern()
self._targetMediaDescriptor = (
self._currentPattern.getMediaDescriptor(self.context)
if self._currentPattern is not None
else None
)
self.rebuildChangeSet()
def rebuildChangeSet(self):
try:
if self.EDIT_MODE:
mdcs = MediaDescriptorChangeSet(
self.context,
self._sourceMediaDescriptor,
self._baselineMediaDescriptor,
)
else:
if self._targetMediaDescriptor is None:
self._mediaChangeSetObj = {}
return
mdcs = MediaDescriptorChangeSet(
self.context,
self._targetMediaDescriptor,
self._sourceMediaDescriptor,
)
self._mediaChangeSetObj = mdcs.getChangeSetObj()
except ValueError:
self._mediaChangeSetObj = {}
def hasPendingChanges(self) -> bool:
return bool(self._mediaChangeSetObj)
def getDisplayedMediaDescriptor(self) -> MediaDescriptor | None:
return self._sourceMediaDescriptor
def getTrackEditSourceDescriptor(self) -> TrackDescriptor | None:
return self.getSelectedTrackDescriptor()
def updateMediaTags(self):
displayedMediaDescriptor = self.getDisplayedMediaDescriptor()
self._sourceMediaTagRowData = populate_tag_table(
self.mediaTagsTable,
displayedMediaDescriptor.getTags() if displayedMediaDescriptor is not None else {},
ignore_keys=self._ignoreGlobalKeys,
remove_keys=self._removeGlobalKeys,
)
def updateTracks(self):
self.tracksTable.clear(columns=True)
self._configure_tracks_table_columns()
self._trackRowData = {}
displayedMediaDescriptor = self.getDisplayedMediaDescriptor()
trackDescriptorList = (
displayedMediaDescriptor.getTrackDescriptors()
if displayedMediaDescriptor is not None
else []
)
typeCounter = {}
applyNormalization = bool(getattr(self, "_applyNormalization", False))
for trackDescriptor in trackDescriptorList:
trackType = trackDescriptor.getType()
if trackType not in typeCounter:
typeCounter[trackType] = 0
dispositionSet = trackDescriptor.getDispositionSet()
audioLayout = trackDescriptor.getAudioLayout()
trackTitle = trackDescriptor.getTitle()
if (
applyNormalization
and not str(trackTitle).strip()
and trackType in (TrackType.VIDEO, TrackType.AUDIO, TrackType.SUBTITLE)
):
trackLanguage = trackDescriptor.getLanguage()
if trackLanguage != IsoLanguage.UNDEFINED:
trackTitle = trackLanguage.label()
row = (
trackDescriptor.getIndex(),
t(trackType.label()),
typeCounter[trackType],
self._track_codec_cell_value(trackDescriptor),
t(audioLayout.label())
if trackType == TrackType.AUDIO
and audioLayout != AudioLayout.LAYOUT_UNDEFINED
else " ",
self._track_language_cell_value(trackDescriptor),
trackTitle,
self._track_disposition_cell_value(
trackDescriptor,
TrackDisposition.DEFAULT,
),
self._track_disposition_cell_value(
trackDescriptor,
TrackDisposition.FORCED,
),
)
row_key = self.tracksTable.add_row(*map(str, row))
self._trackRowData[row_key] = trackDescriptor
typeCounter[trackType] += 1
def updateDifferences(self):
self.rebuildChangeSet()
self.differencesTable.clear()
if not self.EDIT_MODE and self._currentPattern is None:
return
targetDescriptor = (
self._sourceMediaDescriptor
if self.EDIT_MODE
else self._targetMediaDescriptor
)
targetTrackDescriptorsByIndex = {
trackDescriptor.getIndex(): trackDescriptor
for trackDescriptor in (
targetDescriptor.getTrackDescriptors()
if targetDescriptor is not None
else []
)
}
tagDifferences = self._mediaChangeSetObj.get(MediaDescriptorChangeSet.TAGS_KEY, {})
for tagKey, tagValue in tagDifferences.get(DIFF_ADDED_KEY, {}).items():
if tagKey not in self._ignoreGlobalKeys:
self.differencesTable.add_row(
t("add media tag: key='{key}' value='{value}'", key=tagKey, value=tagValue)
)
for tagKey, tagValue in tagDifferences.get(DIFF_REMOVED_KEY, {}).items():
if tagKey in self._ignoreGlobalKeys:
continue
if not self.EDIT_MODE and tagKey in self._removeGlobalKeys:
continue
self.differencesTable.add_row(
t("remove media tag: key='{key}' value='{value}'", key=tagKey, value=tagValue)
)
for tagKey, tagValue in tagDifferences.get(DIFF_CHANGED_KEY, {}).items():
if tagKey not in self._ignoreGlobalKeys:
self.differencesTable.add_row(
t("change media tag: key='{key}' value='{value}'", key=tagKey, value=tagValue)
)
trackDifferences = self._mediaChangeSetObj.get(MediaDescriptorChangeSet.TRACKS_KEY, {})
for trackDescriptor in trackDifferences.get(DIFF_ADDED_KEY, {}).values():
self.differencesTable.add_row(
t(
"add {track_type} track: index={index} lang={language}",
track_type=t(trackDescriptor.getType().label()),
index=trackDescriptor.getIndex(),
language=trackDescriptor.getLanguage().threeLetter(),
)
)
for trackIndex in trackDifferences.get(DIFF_REMOVED_KEY, {}).keys():
self.differencesTable.add_row(t("remove stream #{index}", index=trackIndex))
for trackIndex, trackDiffObj in trackDifferences.get(DIFF_CHANGED_KEY, {}).items():
targetTrackDescriptor = targetTrackDescriptorsByIndex.get(trackIndex)
if targetTrackDescriptor is None:
continue
tagsDiff = trackDiffObj.get(MediaDescriptorChangeSet.TAGS_KEY, {})
for tagKey, tagValue in tagsDiff.get(DIFF_REMOVED_KEY, {}).items():
self.differencesTable.add_row(
t(
"change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}",
index=targetTrackDescriptor.getIndex(),
track_type=t(targetTrackDescriptor.getType().label()),
sub_index=targetTrackDescriptor.getSubIndex(),
key=tagKey,
value=tagValue,
)
)
for tagKey, tagValue in tagsDiff.get(DIFF_ADDED_KEY, {}).items():
self.differencesTable.add_row(
t(
"change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}",
index=targetTrackDescriptor.getIndex(),
track_type=t(targetTrackDescriptor.getType().label()),
sub_index=targetTrackDescriptor.getSubIndex(),
key=tagKey,
value=tagValue,
)
)
for tagKey, tagValue in tagsDiff.get(DIFF_CHANGED_KEY, {}).items():
self.differencesTable.add_row(
t(
"change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}",
index=targetTrackDescriptor.getIndex(),
track_type=t(targetTrackDescriptor.getType().label()),
sub_index=targetTrackDescriptor.getSubIndex(),
key=tagKey,
value=tagValue,
)
)
dispositionDiff = trackDiffObj.get(MediaDescriptorChangeSet.DISPOSITION_SET_KEY, {})
for addedDisposition in dispositionDiff.get(DIFF_ADDED_KEY, set()):
self.differencesTable.add_row(
t(
"change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}",
index=targetTrackDescriptor.getIndex(),
track_type=t(targetTrackDescriptor.getType().label()),
sub_index=targetTrackDescriptor.getSubIndex(),
disposition=t(addedDisposition.label()),
)
)
for removedDisposition in dispositionDiff.get(DIFF_REMOVED_KEY, set()):
self.differencesTable.add_row(
t(
"change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}",
index=targetTrackDescriptor.getIndex(),
track_type=t(targetTrackDescriptor.getType().label()),
sub_index=targetTrackDescriptor.getSubIndex(),
disposition=t(removedDisposition.label()),
)
)
def getSelectedMediaTag(self):
try:
row_key, _ = self.mediaTagsTable.coordinate_to_cell_key(
self.mediaTagsTable.cursor_coordinate
)
if row_key is not None:
return self._sourceMediaTagRowData.get(row_key)
return None
except CellDoesNotExist:
return None
def getSelectedTrackDescriptor(self):
try:
row_key, _ = self.tracksTable.coordinate_to_cell_key(
self.tracksTable.cursor_coordinate
)
if row_key is not None:
return self._trackRowData.get(row_key)
return None
except CellDoesNotExist:
return None
def setSelectedTrackDefault(self):
selectedTrackDescriptor = self.getTrackEditSourceDescriptor()
if selectedTrackDescriptor is None:
return False
self._sourceMediaDescriptor.setDefaultSubTrack(
selectedTrackDescriptor.getType(),
selectedTrackDescriptor.getSubIndex(),
)
return True
def setSelectedTrackForced(self):
selectedTrackDescriptor = self.getTrackEditSourceDescriptor()
if selectedTrackDescriptor is None:
return False
self._sourceMediaDescriptor.setForcedSubTrack(
selectedTrackDescriptor.getType(),
selectedTrackDescriptor.getSubIndex(),
)
return True

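`updateDifferences()` above consumes a change-set object keyed by added/removed/changed buckets per tag. A minimal sketch of how such a tag-level diff might be computed — the real `MediaDescriptorChangeSet` also diffs tracks and disposition sets, and the bucket key names used here are assumptions:

```python
def diff_tags(source: dict, target: dict) -> dict:
    """Classify tag differences into the buckets the differences table consumes."""
    added = {k: v for k, v in target.items() if k not in source}
    removed = {k: v for k, v in source.items() if k not in target}
    changed = {k: target[k] for k in source.keys() & target.keys()
               if source[k] != target[k]}
    diff = {}
    if added:
        diff["added"] = added        # assumed key names, see ffx.helper
    if removed:
        diff["removed"] = removed
    if changed:
        diff["changed"] = changed
    return diff

print(diff_tags({"title": "Old", "encoder": "x264"},
                {"title": "New", "language": "eng"}))
```

An empty result (`{}`) is what `hasPendingChanges()` treats as "nothing to apply".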
src/ffx/metadata_editor.py

@@ -0,0 +1,177 @@
from __future__ import annotations
import click
import os
import tempfile
from time import monotonic
from .constants import (
DEFAULT_AC3_BANDWIDTH,
DEFAULT_DTS_BANDWIDTH,
DEFAULT_STEREO_BANDWIDTH,
FFMPEG_COMMAND_TOKENS,
)
from .media_descriptor import MediaDescriptor
from .media_descriptor_change_set import MediaDescriptorChangeSet
from .process import executeProcess, formatCommandSequence
from .video_encoder import VideoEncoder
from .helper import LogLevel
def create_temporary_output_path(source_path: str) -> str:
sourceDirectory = os.path.dirname(os.path.abspath(source_path)) or "."
sourceBasename = os.path.basename(source_path)
sourceStem, sourceExtension = os.path.splitext(sourceBasename)
descriptor, temporaryPath = tempfile.mkstemp(
prefix=f".{sourceStem}.ffx-edit-",
suffix=sourceExtension or ".tmp",
dir=sourceDirectory,
)
os.close(descriptor)
os.unlink(temporaryPath)
return temporaryPath
def build_metadata_edit_context(context: dict) -> dict:
editContext = dict(context)
editContext["video_encoder"] = VideoEncoder.COPY
editContext["perform_cut"] = False
editContext["no_signature"] = bool(editContext.get("no_signature", True))
editContext["resource_limits"] = dict(editContext.get("resource_limits", {}))
editContext["bitrates"] = dict(
editContext.get(
"bitrates",
{
"stereo": f"{DEFAULT_STEREO_BANDWIDTH}k",
"ac3": f"{DEFAULT_AC3_BANDWIDTH}k",
"dts": f"{DEFAULT_DTS_BANDWIDTH}k",
},
)
)
editContext["encoding_metadata_tags"] = {}
return editContext
def build_metadata_edit_command(
context: dict,
source_path: str,
target_path: str,
baseline_descriptor: MediaDescriptor,
draft_descriptor: MediaDescriptor,
) -> list[str]:
changeSet = MediaDescriptorChangeSet(context, draft_descriptor, baseline_descriptor)
return (
list(FFMPEG_COMMAND_TOKENS)
+ ["-i", source_path, "-map", "0", "-c", "copy"]
+ changeSet.generateMetadataTokens()
+ changeSet.generateDispositionTokens()
+ [target_path]
)
def notify_ffmpeg_invocation(
context: dict,
command_sequence: list[str],
*,
loggingHandler = None,
dry_run: bool = False,
) -> None:
loggingCallback = loggingHandler or context.get("logging_handler")
if not callable(loggingCallback):
return
verbosity = int(context.get("verbosity", 0) or 0)
if verbosity > 0:
if dry_run:
loggingCallback(f"ffmpeg dry-run: {formatCommandSequence(command_sequence)}", level = LogLevel.DEBUG)
else:
loggingCallback(f"ffmpeg: {formatCommandSequence(command_sequence)}", level = LogLevel.DEBUG)
return
if dry_run:
loggingCallback("ffmpeg dry-run prepared.")
else:
loggingCallback("ffmpeg metadata write started.")
def apply_metadata_edits(
context: dict,
source_path: str,
baseline_descriptor: MediaDescriptor,
draft_descriptor: MediaDescriptor,
*,
loggingHandler = None,
) -> dict[str, object]:
temporaryOutputPath = create_temporary_output_path(source_path)
editContext = build_metadata_edit_context(context)
commandSequence = build_metadata_edit_command(
editContext,
source_path,
temporaryOutputPath,
baseline_descriptor,
draft_descriptor,
)
ffmpegSeconds = 0.0
replaceSeconds = 0.0
try:
if editContext.get("dry_run", False):
notify_ffmpeg_invocation(
editContext,
commandSequence,
loggingHandler = loggingHandler,
dry_run=True,
)
return {
"applied": False,
"dry_run": True,
"target_path": temporaryOutputPath,
"command_sequence": commandSequence,
"timings": {
"ffmpeg_seconds": ffmpegSeconds,
"replace_seconds": replaceSeconds,
"write_seconds": ffmpegSeconds + replaceSeconds,
},
}
notify_ffmpeg_invocation(editContext,
commandSequence,
loggingHandler = loggingHandler)
ffmpegStart = monotonic()
_out, err, rc = executeProcess(commandSequence, context=editContext)
ffmpegSeconds = monotonic() - ffmpegStart
if rc:
raise click.ClickException(f"ffmpeg edit failed: rc={rc} error={err}")
replaceStart = monotonic()
os.replace(temporaryOutputPath, source_path)
replaceSeconds = monotonic() - replaceStart
return {
"applied": True,
"dry_run": False,
"target_path": source_path,
"command_sequence": commandSequence,
"timings": {
"ffmpeg_seconds": ffmpegSeconds,
"replace_seconds": replaceSeconds,
"write_seconds": ffmpegSeconds + replaceSeconds,
},
}
except Exception:
if os.path.exists(temporaryOutputPath):
os.remove(temporaryOutputPath)
raise
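`apply_metadata_edits` follows the classic sibling-temp-file pattern: write into a temporary file in the same directory (so `os.replace` stays on one filesystem and the swap is atomic), then replace the source, removing the temp file on any failure. A stand-alone sketch, with a plain byte write standing in for the ffmpeg invocation:

```python
import os
import tempfile

def rewrite_atomically(path: str, data: bytes) -> None:
    # Temp file must live next to the target: os.replace is only atomic
    # within a single filesystem.
    directory = os.path.dirname(os.path.abspath(path)) or "."
    fd, tmp_path = tempfile.mkstemp(prefix=".ffx-edit-", dir=directory)
    try:
        with os.fdopen(fd, "wb") as handle:
            handle.write(data)
        os.replace(tmp_path, path)  # atomic swap over the original
    except Exception:
        # Mirror the cleanup above: never leave the temp file behind.
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise

with tempfile.TemporaryDirectory() as workdir:
    target = os.path.join(workdir, "movie.mkv")
    with open(target, "wb") as handle:
        handle.write(b"old")
    rewrite_atomically(target, b"new")
    with open(target, "rb") as handle:
        print(handle.read())  # b'new'
```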


@@ -4,6 +4,7 @@ from sqlalchemy.orm import relationship, declarative_base, sessionmaker
from .show import Base
from ffx.attachment_format import AttachmentFormat
from ffx.track_type import TrackType
from ffx.iso_language import IsoLanguage
@@ -132,9 +133,16 @@ class Track(Base):
if trackType in [t.label() for t in TrackType]:
if trackType == TrackType.ATTACHMENT.label():
storedFormatIdentifier = AttachmentFormat.identifyFfprobeStream(streamObj).identifier()
else:
storedFormatIdentifier = TrackCodec.identify(
streamObj.get(TrackDescriptor.FFPROBE_CODEC_KEY)
).identifier()
return cls(pattern_id = patternId,
track_type = trackType,
codec_name = storedFormatIdentifier,
disposition_flags = sum([2**t.index() for (k,v) in streamObj[TrackDescriptor.FFPROBE_DISPOSITION_KEY].items()
if v and (t := TrackDisposition.find(k)) is not None]),
audio_layout = AudioLayout.identify(streamObj))
@@ -153,8 +161,20 @@ class Track(Base):
return TrackType.fromIndex(self.track_type)
def getCodec(self) -> TrackCodec:
if self.getType() == TrackType.ATTACHMENT:
return TrackCodec.UNKNOWN
return TrackCodec.identify(self.codec_name)
def getAttachmentFormat(self) -> AttachmentFormat:
if self.getType() != TrackType.ATTACHMENT:
return AttachmentFormat.UNKNOWN
return AttachmentFormat.identify(self.codec_name)
def getFormatDescriptor(self):
if self.getType() == TrackType.ATTACHMENT:
return self.getAttachmentFormat()
return self.getCodec()
def getIndex(self):
return int(self.index) if self.index is not None else -1
@@ -206,7 +226,10 @@ class Track(Base):
kwargs[TrackDescriptor.SUB_INDEX_KEY] = subIndex
kwargs[TrackDescriptor.TRACK_TYPE_KEY] = self.getType()
kwargs[TrackDescriptor.CODEC_KEY] = self.getCodec()
if self.getType() == TrackType.ATTACHMENT:
kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = self.getAttachmentFormat()
else:
kwargs[TrackDescriptor.CODEC_KEY] = self.getCodec()
kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = self.getDispositionSet()
kwargs[TrackDescriptor.TAGS_KEY] = self.getTags()
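The Track changes above reuse one stored `codec_name` column for two enum domains: attachment tracks resolve it against `AttachmentFormat`, everything else against `TrackCodec`, with `getFormatDescriptor()` dispatching between the two. A toy sketch of that dispatch (these enum members and values are stand-ins, not the project's actual identifiers):

```python
from enum import Enum

class TrackType(Enum):
    VIDEO = 0
    AUDIO = 1
    ATTACHMENT = 4

class TrackCodec(Enum):          # hypothetical subset
    H264 = "h264"
    UNKNOWN = "unknown"

class AttachmentFormat(Enum):    # hypothetical subset
    TTF = "ttf"
    UNKNOWN = "unknown"

class Track:
    def __init__(self, track_type: TrackType, codec_name: str):
        self.track_type = track_type
        # One column stores either a codec or an attachment-format identifier.
        self.codec_name = codec_name

    def getFormatDescriptor(self):
        # Attachments are not coded streams, so their identifier resolves
        # against AttachmentFormat instead of TrackCodec.
        if self.track_type is TrackType.ATTACHMENT:
            try:
                return AttachmentFormat(self.codec_name)
            except ValueError:
                return AttachmentFormat.UNKNOWN
        try:
            return TrackCodec(self.codec_name)
        except ValueError:
            return TrackCodec.UNKNOWN
```

This is also why the PatternController hunk below switches from `getCodec().identifier()` to `getFormatDescriptor().identifier()`: the caller no longer needs to know which domain the track belongs to.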

View File

@@ -134,7 +134,7 @@ class PatternController:
def _build_track_row(self, trackDescriptor: TrackDescriptor) -> Track:
track = Track(
track_type=int(trackDescriptor.getType().index()),
codec_name=str(trackDescriptor.getCodec().identifier()),
codec_name=str(trackDescriptor.getFormatDescriptor().identifier()),
index=int(trackDescriptor.getIndex()),
source_index=int(trackDescriptor.getSourceIndex()),
disposition_flags=int(

View File

@@ -4,8 +4,10 @@ from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid
from .i18n import t
from .show_controller import ShowController
from .pattern_controller import PatternController
from .screen_support import build_screen_log_pane, go_back_or_exit
from ffx.model.pattern import Pattern
@@ -13,15 +15,22 @@ from ffx.model.pattern import Pattern
# Screen[dict[int, str, int]]
class PatternDeleteScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 2;
grid-rows: 2 auto;
grid-columns: 30 330;
grid-columns: 18 5fr;
height: 100%;
width: 100%;
min-width: 90;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -59,6 +68,10 @@ class PatternDeleteScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
if self.__showDescriptor:
self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")
if self.__pattern is not None:
@@ -70,24 +83,31 @@ class PatternDeleteScreen(Screen):
yield Header()
with Grid():
# Row 1
yield Static(t("Are you sure you want to delete the following filename pattern?"), id="toplabel", classes="two")
yield Static("Are you sure you want to delete the following filename pattern?", id="toplabel", classes="two")
# Row 2
yield Static("", classes="two")
yield Static("Pattern")
# Row 3
yield Static(t("Pattern"))
yield Static("", id="patternlabel")
# Row 4
yield Static("", classes="two")
yield Static("from show")
# Row 5
yield Static(t("from show"))
yield Static("", id="showlabel")
# Row 6
yield Static("", classes="two")
yield Button("Delete", id="delete_button")
yield Button("Cancel", id="cancel_button")
# Row 7
yield Button(t("Delete"), id="delete_button")
yield Button(t("Cancel"), id="cancel_button")
yield build_screen_log_pane()
yield Footer()
@@ -109,3 +129,5 @@ class PatternDeleteScreen(Screen):
if event.button.id == "cancel_button":
self.app.pop_screen()
def action_back(self):
go_back_or_exit(self)

View File

@@ -1,6 +1,7 @@
import click, re
from typing import List
from textual import events
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input, DataTable, TextArea
from textual.containers import Grid
@@ -14,7 +15,14 @@ from .shifted_season_details_screen import ShiftedSeasonDetailsScreen
from .tag_details_screen import TagDetailsScreen
from .tag_delete_screen import TagDeleteScreen
from .screen_support import build_screen_bootstrap, build_screen_controllers
from .screen_support import (
add_auto_table_column,
build_screen_bootstrap,
build_screen_controllers,
build_screen_log_pane,
go_back_or_exit,
populate_tag_table,
)
from ffx.track_type import TrackType
@@ -27,22 +35,28 @@ from ffx.file_properties import FileProperties
from ffx.iso_language import IsoLanguage
from ffx.audio_layout import AudioLayout
from ffx.model.shifted_season import ShiftedSeason
from ffx.helper import formatRichColor, removeRichColor
from .i18n import t
# Screen[dict[int, str, int]]
class PatternDetailsScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 7 20;
grid-rows: 2 2 2 2 2 2 6 2 2 8 2 2 8 2 2 8 2 2 2 2;
grid-columns: 25 25 25 25 25 25 25;
grid-columns: 18 1fr 1fr 1fr 1fr 1fr 1fr;
height: 100%;
width: 100%;
min-width: 140;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -54,6 +68,7 @@ class PatternDetailsScreen(Screen):
DataTable {
min-height: 6;
width: 100%;
}
DataTable .datatable--cursor {
@@ -73,6 +88,9 @@ class PatternDetailsScreen(Screen):
.three {
column-span: 3;
}
.two {
column-span: 2;
}
.four {
column-span: 4;
@@ -99,7 +117,7 @@ class PatternDetailsScreen(Screen):
}
.yellow {
tint: yellow 40%;
color: yellow;
}
"""
@@ -130,11 +148,15 @@ class PatternDetailsScreen(Screen):
self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None
self.__draftTracks : List[TrackDescriptor] = []
self.__draftTags : dict[str, str] = {}
self.__trackRowData: dict[object, TrackDescriptor] = {}
self.__tagRowData: dict[object, tuple[str, str]] = {}
self.__shiftedSeasonRowData: dict[object, dict[str, int | None]] = {}
def updateTracks(self):
self.tracksTable.clear()
self.__trackRowData = {}
tracks = self.getCurrentTrackDescriptors()
@@ -154,18 +176,19 @@ class PatternDetailsScreen(Screen):
audioLayout = td.getAudioLayout()
row = (td.getIndex(),
trackType.label(),
t(trackType.label()),
typeCounter[trackType],
td.getCodec().label(),
audioLayout.label() if trackType == TrackType.AUDIO
td.getFormatDescriptor().label(),
t(audioLayout.label()) if trackType == TrackType.AUDIO
and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
trackLanguage.label() if trackLanguage != IsoLanguage.UNDEFINED else ' ',
td.getTitle(),
'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
'Yes' if TrackDisposition.FORCED in dispoSet else 'No',
t('Yes') if TrackDisposition.DEFAULT in dispoSet else t('No'),
t('Yes') if TrackDisposition.FORCED in dispoSet else t('No'),
td.getSourceIndex())
self.tracksTable.add_row(*map(str, row))
row_key = self.tracksTable.add_row(*map(str, row))
self.__trackRowData[row_key] = td
typeCounter[trackType] += 1
@@ -243,29 +266,23 @@ class PatternDetailsScreen(Screen):
def updateTags(self):
self.tagsTable.clear()
tags = (
self.__tac.findAllMediaTags(self.__pattern.getId())
if self.__pattern is not None
else self.__draftTags
)
for tagKey, tagValue in tags.items():
textColor = None
if tagKey in self.__ignoreGlobalKeys:
textColor = 'blue'
if tagKey in self.__removeGlobalKeys:
textColor = 'red'
row = (formatRichColor(tagKey, textColor), formatRichColor(tagValue, textColor))
self.tagsTable.add_row(*map(str, row))
self.__tagRowData = populate_tag_table(
self.tagsTable,
tags,
ignore_keys=self.__ignoreGlobalKeys,
remove_keys=self.__removeGlobalKeys,
)
def updateShiftedSeasons(self):
self.shiftedSeasonsTable.clear()
self.__shiftedSeasonRowData = {}
if self.__pattern is None:
return
@@ -273,6 +290,7 @@ class PatternDetailsScreen(Screen):
shiftedSeason: ShiftedSeason
for shiftedSeason in self.__ssc.getShiftedSeasonSiblings(patternId=self.__pattern.getId()):
shiftedSeasonObj = shiftedSeason.getObj()
shiftedSeasonObj['id'] = shiftedSeason.getId()
firstEpisode = shiftedSeasonObj['first_episode']
firstEpisodeStr = str(firstEpisode) if firstEpisode != -1 else ''
@@ -288,7 +306,8 @@ class PatternDetailsScreen(Screen):
shiftedSeasonObj['episode_offset'],
)
self.shiftedSeasonsTable.add_row(*map(str, row))
row_key = self.shiftedSeasonsTable.add_row(*map(str, row))
self.__shiftedSeasonRowData[row_key] = shiftedSeasonObj
def getSelectedShiftedSeasonObjFromInput(self):
@@ -300,29 +319,7 @@ class PatternDetailsScreen(Screen):
)
if row_key is not None:
selected_row_data = self.shiftedSeasonsTable.get_row(row_key)
def parse_int_or_default(value: str, default: int) -> int:
try:
return int(value)
except (TypeError, ValueError):
return default
shiftedSeasonObj['original_season'] = int(selected_row_data[0])
shiftedSeasonObj['first_episode'] = parse_int_or_default(selected_row_data[1], -1)
shiftedSeasonObj['last_episode'] = parse_int_or_default(selected_row_data[2], -1)
shiftedSeasonObj['season_offset'] = parse_int_or_default(selected_row_data[3], 0)
shiftedSeasonObj['episode_offset'] = parse_int_or_default(selected_row_data[4], 0)
if self.__pattern is not None:
shiftedSeasonId = self.__ssc.findShiftedSeason(
patternId=self.__pattern.getId(),
originalSeason=shiftedSeasonObj['original_season'],
firstEpisode=shiftedSeasonObj['first_episode'],
lastEpisode=shiftedSeasonObj['last_episode'],
)
if shiftedSeasonId is not None:
shiftedSeasonObj['id'] = shiftedSeasonId
shiftedSeasonObj = dict(self.__shiftedSeasonRowData.get(row_key, {}))
except CellDoesNotExist:
pass
@@ -332,8 +329,12 @@ class PatternDetailsScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
if self.__showDescriptor is not None:
self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")
self.updateShowQualityHint()
if self.__pattern is not None:
@@ -349,40 +350,51 @@ class PatternDetailsScreen(Screen):
self.updateTracks()
self.updateShiftedSeasons()
def on_screen_resume(self, _event: events.ScreenResume) -> None:
if not hasattr(self, "tracksTable") or not hasattr(self, "tagsTable"):
return
self.updateShowQualityHint()
self.updateTags()
self.updateTracks()
if self.__pattern is not None and hasattr(self, "shiftedSeasonsTable"):
self.updateShiftedSeasons()
def compose(self):
self.tagsTable = DataTable(classes="seven")
# Define the columns with headers
self.column_key_tag_key = self.tagsTable.add_column("Key", width=50)
self.column_key_tag_value = self.tagsTable.add_column("Value", width=100)
self.column_key_tag_key = add_auto_table_column(self.tagsTable, t("Key"))
self.column_key_tag_value = add_auto_table_column(self.tagsTable, t("Value"))
self.tagsTable.cursor_type = 'row'
self.tracksTable = DataTable(id="tracks_table", classes="seven")
self.column_key_track_index = self.tracksTable.add_column("Index", width=5)
self.column_key_track_type = self.tracksTable.add_column("Type", width=10)
self.column_key_track_sub_index = self.tracksTable.add_column("SubIndex", width=8)
self.column_key_track_codec = self.tracksTable.add_column("Codec", width=10)
self.column_key_track_audio_layout = self.tracksTable.add_column("Layout", width=10)
self.column_key_track_language = self.tracksTable.add_column("Language", width=15)
self.column_key_track_title = self.tracksTable.add_column("Title", width=48)
self.column_key_track_default = self.tracksTable.add_column("Default", width=8)
self.column_key_track_forced = self.tracksTable.add_column("Forced", width=8)
self.column_key_track_source_index = self.tracksTable.add_column("SrcIndex", width=8)
self.column_key_track_index = add_auto_table_column(self.tracksTable, t("Index"))
self.column_key_track_type = add_auto_table_column(self.tracksTable, t("Type"))
self.column_key_track_sub_index = add_auto_table_column(self.tracksTable, t("SubIndex"))
self.column_key_track_codec = add_auto_table_column(self.tracksTable, t("Codec"))
self.column_key_track_audio_layout = add_auto_table_column(self.tracksTable, t("Layout"))
self.column_key_track_language = add_auto_table_column(self.tracksTable, t("Language"))
self.column_key_track_title = add_auto_table_column(self.tracksTable, t("Title"))
self.column_key_track_default = add_auto_table_column(self.tracksTable, t("Default"))
self.column_key_track_forced = add_auto_table_column(self.tracksTable, t("Forced"))
self.column_key_track_source_index = add_auto_table_column(self.tracksTable, t("SrcIndex"))
self.tracksTable.cursor_type = 'row'
self.shiftedSeasonsTable = DataTable(classes="seven")
self.column_key_original_season = self.shiftedSeasonsTable.add_column("Source Season", width=18)
self.column_key_first_episode = self.shiftedSeasonsTable.add_column("First Episode", width=18)
self.column_key_last_episode = self.shiftedSeasonsTable.add_column("Last Episode", width=18)
self.column_key_season_offset = self.shiftedSeasonsTable.add_column("Season Offset", width=18)
self.column_key_episode_offset = self.shiftedSeasonsTable.add_column("Episode Offset", width=18)
self.column_key_original_season = add_auto_table_column(self.shiftedSeasonsTable, t("Source Season"))
self.column_key_first_episode = add_auto_table_column(self.shiftedSeasonsTable, t("First Episode"))
self.column_key_last_episode = add_auto_table_column(self.shiftedSeasonsTable, t("Last Episode"))
self.column_key_season_offset = add_auto_table_column(self.shiftedSeasonsTable, t("Season Offset"))
self.column_key_episode_offset = add_auto_table_column(self.shiftedSeasonsTable, t("Episode Offset"))
self.shiftedSeasonsTable.cursor_type = 'row'
@@ -391,47 +403,49 @@ class PatternDetailsScreen(Screen):
with Grid():
# 1
yield Static("Edit filename pattern" if self.__pattern is not None else "New filename pattern", id="toplabel")
# Row 1
yield Static(t("Edit filename pattern") if self.__pattern is not None else t("New filename pattern"), id="toplabel")
yield Input(type="text", id="pattern_input", classes="six")
# 2
yield Static("from show")
# Row 2
yield Static(t("from show"))
yield Static("", id="showlabel", classes="five")
yield Button("Substitute pattern", id="pattern_button")
yield Button(t("Substitute pattern"), id="pattern_button")
# 3
# Row 3
yield Static(" ", classes="seven")
# 4
yield Static("Quality")
# Row 4
yield Static(t("Quality"))
yield Input(type="integer", id="quality_input")
yield Static(' ', classes="five")
yield Static(" ")
yield Static("", id="show_quality_hint", classes="two yellow")
yield Static(' ', classes="two")
# 5
# Row 5
yield Static(" ", classes="seven")
# 6
yield Static("Notes")
# Row 6
yield Static(t("Notes"))
yield Static(" ", classes="six")
# 7
# Row 7
yield TextArea(id="notes_textarea", classes="four_box seven")
# 8
# Row 8
yield Static(" ", classes="seven")
# 9
yield Static("Shifted Seasons")
# Row 9
yield Static(t("Numbering Mapping"))
if self.__pattern is not None:
yield Button("Add", id="button_add_shifted_season")
yield Button("Edit", id="button_edit_shifted_season")
yield Button("Delete", id="button_delete_shifted_season")
yield Button(t("Add"), id="button_add_shifted_season")
yield Button(t("Edit"), id="button_edit_shifted_season")
yield Button(t("Delete"), id="button_delete_shifted_season")
else:
yield Static(" ")
yield Static(" ")
@@ -441,61 +455,79 @@ class PatternDetailsScreen(Screen):
yield Static(" ")
yield Static(" ")
# 10
# Row 10
yield self.shiftedSeasonsTable
# 11
# Row 11
yield Static(" ", classes="seven")
# 12
yield Static("Media Tags")
yield Button("Add", id="button_add_tag")
yield Button("Edit", id="button_edit_tag")
yield Button("Delete", id="button_delete_tag")
# Row 12
yield Static(t("Media Tags"))
yield Button(t("Add"), id="button_add_tag")
yield Button(t("Edit"), id="button_edit_tag")
yield Button(t("Delete"), id="button_delete_tag")
yield Static(" ")
yield Static(" ")
yield Static(" ")
# 13
# Row 13
yield self.tagsTable
# 14
# Row 14
yield Static(" ", classes="seven")
# 15
yield Static("Streams")
yield Button("Add", id="button_add_track")
yield Button("Edit", id="button_edit_track")
yield Button("Delete", id="button_delete_track")
# Row 15
yield Static(t("Streams"))
yield Button(t("Add"), id="button_add_track")
yield Button(t("Edit"), id="button_edit_track")
yield Button(t("Delete"), id="button_delete_track")
yield Static(" ")
yield Button("Up", id="button_track_up")
yield Button("Down", id="button_track_down")
yield Button(t("Up"), id="button_track_up")
yield Button(t("Down"), id="button_track_down")
# 16
# Row 16
yield self.tracksTable
# 17
# Row 17
yield Static(" ", classes="seven")
# 18
# Row 18
yield Static(" ", classes="seven")
# 19
yield Button("Save", id="save_button")
yield Button("Cancel", id="cancel_button")
# Row 19
yield Button(t("Save"), id="save_button")
yield Button(t("Cancel"), id="cancel_button")
yield Static(" ", classes="five")
# 20
# Row 20
yield Static(" ", classes="seven")
yield build_screen_log_pane()
yield Footer()
def getPatternFromInput(self):
return str(self.query_one("#pattern_input", Input).value)
def getShowQualityHintText(self):
if self.__showDescriptor is None:
return ""
showQuality = int(self.__showDescriptor.getQuality() or 0)
if showQuality <= 0:
return ""
patternQuality = int(getattr(self.__pattern, "quality", 0) or 0)
if patternQuality > 0:
return ""
return f"{t('Show')}: {showQuality}"
def updateShowQualityHint(self):
self.query_one("#show_quality_hint", Static).update(self.getShowQualityHintText())
def getQualityFromInput(self):
try:
return int(self.query_one("#quality_input", Input).value)
@@ -513,15 +545,7 @@ class PatternDetailsScreen(Screen):
row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)
if row_key is not None:
selected_track_data = self.tracksTable.get_row(row_key)
trackIndex = int(selected_track_data[0])
trackSubIndex = int(selected_track_data[2])
for trackDescriptor in self.getCurrentTrackDescriptors():
if (trackDescriptor.getIndex() == trackIndex
and trackDescriptor.getSubIndex() == trackSubIndex):
return trackDescriptor
return self.__trackRowData.get(row_key)
return None
@@ -539,12 +563,7 @@ class PatternDetailsScreen(Screen):
row_key, col_key = self.tagsTable.coordinate_to_cell_key(self.tagsTable.cursor_coordinate)
if row_key is not None:
selected_tag_data = self.tagsTable.get_row(row_key)
tagKey = removeRichColor(selected_tag_data[0])
tagValue = removeRichColor(selected_tag_data[1])
return tagKey, tagValue
return self.__tagRowData.get(row_key)
else:
return None
@@ -790,5 +809,8 @@ class PatternDetailsScreen(Screen):
def handle_update_shifted_season(self, screenResult):
self.updateShiftedSeasons()
def action_back(self):
go_back_or_exit(self)
def handle_delete_shifted_season(self, screenResult):
self.updateShiftedSeasons()
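The PatternDetailsScreen changes above replace re-parsing of formatted cell text with a row-key side map: `DataTable.add_row()` returns an opaque key, and dictionaries like `__trackRowData` map that key back to the raw descriptor, so selection handlers retrieve the original object instead of reconstructing it from display strings. A dependency-free sketch of the pattern (the `FakeTable` class stands in for textual's `DataTable`, which is assumed but not imported here):

```python
import itertools

class FakeTable:
    """Hypothetical stand-in for textual's DataTable add_row()/clear() API."""
    def __init__(self):
        self._counter = itertools.count()
        self.rows = {}

    def add_row(self, *cells):
        key = next(self._counter)   # textual returns a RowKey object instead
        self.rows[key] = cells
        return key

    def clear(self):
        self.rows.clear()

descriptors = [{"index": 0, "title": "Video"}, {"index": 1, "title": "Audio"}]
table = FakeTable()
row_data = {}
for td in descriptors:
    # Cells hold display strings; the side dict keeps the raw object,
    # so translated or color-formatted text never has to be parsed back.
    row_key = table.add_row(str(td["index"]), td["title"])
    row_data[row_key] = td

selected = row_data.get(0)
```

This is the same idea behind `populate_tag_table()` returning its `row_data` mapping: formatting (Rich color markup, `t()` translations) stays a one-way transformation.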

View File

@@ -1,7 +1,10 @@
import os
import shlex
import signal
import subprocess
from typing import Iterable, List
import threading
import time
from typing import Callable, Iterable, List
from .logging_utils import get_ffx_logger
@@ -118,6 +121,8 @@ def executeProcess(
directory: str = None,
context: dict = None,
timeoutSeconds: float = None,
stdoutLineHandler: Callable[[str], bool] | None = None,
stderrLineHandler: Callable[[str], bool] | None = None,
):
logger = context['logger'] if context is not None and 'logger' in context else get_ffx_logger()
@@ -131,6 +136,16 @@ def executeProcess(
formatCommandSequence(wrappedCommandSequence),
)
if stdoutLineHandler is not None or stderrLineHandler is not None:
return executeStreamingProcess(
wrappedCommandSequence,
directory=directory,
logger=logger,
timeoutSeconds=timeoutSeconds,
stdoutLineHandler=stdoutLineHandler,
stderrLineHandler=stderrLineHandler,
)
try:
completed = subprocess.run(
wrappedCommandSequence,
@@ -167,3 +182,162 @@ def executeProcess(
)
return completed.stdout, completed.stderr, completed.returncode
def terminateProcess(process: subprocess.Popen, *, killAfterSeconds: float = 1.0) -> None:
if process.poll() is not None:
return
try:
if hasattr(os, "killpg"):
os.killpg(process.pid, signal.SIGTERM)
else:
process.terminate()
except ProcessLookupError:
return
deadline = time.monotonic() + killAfterSeconds
while process.poll() is None and time.monotonic() < deadline:
time.sleep(0.05)
if process.poll() is not None:
return
try:
if hasattr(os, "killpg"):
os.killpg(process.pid, signal.SIGKILL)
else:
process.kill()
except ProcessLookupError:
return
def readProcessStream(
stream,
outputParts: list[str],
lineHandler: Callable[[str], bool] | None,
stopRequested: threading.Event,
logger,
) -> None:
try:
for line in iter(stream.readline, ''):
outputParts.append(line)
if lineHandler is None:
continue
try:
if lineHandler(line):
stopRequested.set()
except Exception:
logger.exception("Process line handler raised an exception")
finally:
stream.close()
def executeStreamingProcess(
commandSequence: List[str],
*,
directory: str = None,
logger = None,
timeoutSeconds: float = None,
stdoutLineHandler: Callable[[str], bool] | None = None,
stderrLineHandler: Callable[[str], bool] | None = None,
):
logger = logger or get_ffx_logger()
try:
process = subprocess.Popen(
commandSequence,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
cwd=directory,
bufsize=1,
start_new_session=True,
)
except FileNotFoundError as ex:
error = (
"Command not found while running "
+ f"{formatCommandSequence(commandSequence)}: {ex.filename or ex}"
)
logger.error(error)
return '', error, COMMAND_NOT_FOUND_RETURN_CODE
stdoutParts: list[str] = []
stderrParts: list[str] = []
stopRequested = threading.Event()
timedOut = False
stdoutThread = threading.Thread(
target=readProcessStream,
args=(
process.stdout,
stdoutParts,
stdoutLineHandler,
stopRequested,
logger,
),
daemon=True,
)
stderrThread = threading.Thread(
target=readProcessStream,
args=(
process.stderr,
stderrParts,
stderrLineHandler,
stopRequested,
logger,
),
daemon=True,
)
stdoutThread.start()
stderrThread.start()
deadline = (
time.monotonic() + float(timeoutSeconds)
if timeoutSeconds is not None
else None
)
terminationRequested = False
while process.poll() is None:
if stopRequested.is_set():
terminationRequested = True
terminateProcess(process)
break
if deadline is not None and time.monotonic() >= deadline:
timedOut = True
terminationRequested = True
terminateProcess(process)
break
time.sleep(0.05)
returnCode = process.wait()
stdoutThread.join()
stderrThread.join()
stdout = ''.join(stdoutParts)
stderr = ''.join(stderrParts)
if timedOut:
error = (
f"Command timed out after {timeoutSeconds} seconds while running "
+ formatCommandSequence(commandSequence)
)
if stderr:
error = f"{error}\n{stderr}"
logger.error(error)
return stdout, error, COMMAND_TIMED_OUT_RETURN_CODE
if returnCode != 0 and not terminationRequested:
logger.warning(
"executeProcess() rc=%s command=%s",
returnCode,
formatCommandSequence(commandSequence),
)
return stdout, stderr, returnCode
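The streaming-process hunk above drains stdout and stderr on dedicated daemon threads so the child never blocks on a full pipe buffer, while the main thread polls for completion, timeout, or a handler-requested stop. A reduced, runnable sketch of the reader-thread part (single pipe, no timeout or process-group termination; `run_streaming` is an illustrative name, not the module's API):

```python
import subprocess
import sys
import threading

def run_streaming(command: list[str], on_line=None):
    """Drain a child's stdout on a daemon thread while waiting for exit.
    A line handler can observe output as it arrives; the real code also
    lets the handler request early termination of the process group."""
    parts: list[str] = []

    def pump(stream) -> None:
        # iter(readline, '') yields each line until EOF in text mode.
        for line in iter(stream.readline, ''):
            parts.append(line)
            if on_line is not None:
                on_line(line)
        stream.close()

    proc = subprocess.Popen(
        command, stdout=subprocess.PIPE, text=True, bufsize=1,
    )
    reader = threading.Thread(target=pump, args=(proc.stdout,), daemon=True)
    reader.start()
    rc = proc.wait()
    reader.join()  # ensure all buffered output has been collected
    return ''.join(parts), rc

out, rc = run_streaming([sys.executable, "-c", "print('a'); print('b')"])
```

The real implementation adds a second thread for stderr, a `threading.Event` for stop requests, and `os.killpg()` via `start_new_session=True` so terminating the leader also reaps ffmpeg's children.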

View File

@@ -1,7 +1,18 @@
from __future__ import annotations
import logging
import weakref
from collections.abc import Mapping
from dataclasses import dataclass
from rich.cells import cell_len
from rich.measure import measure_renderables
from rich.text import Text
from textual import events
from textual.widgets import Collapsible, RichLog, Static
from .helper import formatRichColor
from .i18n import t
from .pattern_controller import PatternController
from .show_controller import ShowController
from .shifted_season_controller import ShiftedSeasonController
@@ -10,11 +21,162 @@ from .tmdb_controller import TmdbController
from .track_controller import TrackController
SCREEN_LOG_PANE_ID = "screen_log_pane"
SCREEN_LOG_VIEW_ID = "screen_log_view"
SCREEN_LOG_RESIZE_HANDLE_ID = "screen_log_resize_handle"
SCREEN_LOG_HANDLER_NAME = "ffx-screen-log"
SCREEN_LOG_DEFAULT_HEIGHT = 8
SCREEN_LOG_MIN_HEIGHT = 4
SCREEN_LOG_COMPONENT_WIDTH = 16
SCREEN_LOG_LEVEL_WIDTH = 8
_SCREEN_LOG_PANE_ENABLED = False
class ScreenLogHandler(logging.Handler):
"""Mirror logger output into the active screen log pane when available."""
def __init__(self, app) -> None:
super().__init__(level=logging.DEBUG)
self.set_name(SCREEN_LOG_HANDLER_NAME)
self.set_app(app)
def set_app(self, app) -> None:
self._app_ref = weakref.ref(app) if app is not None else lambda: None
def emit(self, record: logging.LogRecord) -> None:
app = self._app_ref()
if app is None:
return
try:
message = str(self.format(record)).strip()
except Exception:
self.handleError(record)
return
if not message:
return
try:
app.call_from_thread(write_screen_log, app.screen, message)
except RuntimeError:
write_screen_log(app.screen, message)
except Exception:
self.handleError(record)
class ScreenLogResizeHandle(Static):
DEFAULT_CSS = """
ScreenLogResizeHandle {
width: 100%;
height: 1;
content-align: center middle;
color: $text-muted;
background: $panel-lighten-1;
}
ScreenLogResizeHandle:hover {
color: $text;
background: $panel-lighten-2;
}
"""
def __init__(self) -> None:
super().__init__(" drag to resize ", id=SCREEN_LOG_RESIZE_HANDLE_ID)
self._drag_active = False
self._drag_origin_screen_y = 0
self._drag_origin_height = SCREEN_LOG_DEFAULT_HEIGHT
def _get_log_pane(self):
return self.parent.parent if self.parent is not None else None
def on_mouse_down(self, event: events.MouseDown) -> None:
if event.button != 1:
return
log_pane = self._get_log_pane()
if log_pane is None:
return
self._drag_active = True
self._drag_origin_screen_y = event.screen_y
self._drag_origin_height = log_pane.get_log_height()
self.capture_mouse()
event.stop()
def on_mouse_move(self, event: events.MouseMove) -> None:
if not self._drag_active:
return
log_pane = self._get_log_pane()
if log_pane is None:
return
next_height = self._drag_origin_height + (
self._drag_origin_screen_y - event.screen_y
)
log_pane.set_log_height(next_height)
event.stop()
def on_mouse_up(self, event: events.MouseUp) -> None:
if not self._drag_active:
return
self._drag_active = False
self.release_mouse()
event.stop()
class ResizableScreenLogPane(Collapsible):
def __init__(self) -> None:
self._log_view = RichLog(
id=SCREEN_LOG_VIEW_ID,
wrap=True,
markup=False,
highlight=False,
auto_scroll=True,
)
self._log_height = SCREEN_LOG_DEFAULT_HEIGHT
self._apply_log_height()
super().__init__(
ScreenLogResizeHandle(),
self._log_view,
title=t("Log"),
collapsed=True,
id=SCREEN_LOG_PANE_ID,
)
self.styles.width = "100%"
def _apply_log_height(self) -> None:
self._log_view.styles.height = self._log_height
self._log_view.styles.width = "100%"
def get_log_height(self) -> int:
return int(self._log_height)
def set_log_height(self, height: int) -> None:
next_height = max(SCREEN_LOG_MIN_HEIGHT, int(height))
try:
available_height = int(self.app.size.height) - 8
except Exception:
available_height = next_height
if available_height > 0:
next_height = min(next_height, available_height)
self._log_height = next_height
self._apply_log_height()
@dataclass(frozen=True)
class ScreenBootstrap:
context: dict
configuration_data: dict
signature_tags: dict
apply_cleanup: bool
remove_global_keys: list
ignore_global_keys: list
remove_track_keys: list
@@ -25,18 +187,62 @@ def build_screen_bootstrap(context: dict) -> ScreenBootstrap:
configurationData = context['config'].getData()
metadataConfiguration = configurationData.get('metadata', {})
streamMetadataConfiguration = metadataConfiguration.get('streams', {})
applyCleanup = bool(context.get('apply_metadata_cleanup', True))
return ScreenBootstrap(
context=context,
configuration_data=configurationData,
signature_tags=metadataConfiguration.get('signature', {}),
remove_global_keys=metadataConfiguration.get('remove', []),
apply_cleanup=applyCleanup,
remove_global_keys=metadataConfiguration.get('remove', []) if applyCleanup else [],
ignore_global_keys=metadataConfiguration.get('ignore', []),
remove_track_keys=streamMetadataConfiguration.get('remove', []),
remove_track_keys=streamMetadataConfiguration.get('remove', []) if applyCleanup else [],
ignore_track_keys=streamMetadataConfiguration.get('ignore', []),
)
def set_screen_log_pane_enabled(enabled: bool) -> None:
global _SCREEN_LOG_PANE_ENABLED
_SCREEN_LOG_PANE_ENABLED = bool(enabled)
def is_screen_log_pane_enabled() -> bool:
return bool(_SCREEN_LOG_PANE_ENABLED)
def configure_screen_log_handler(logger, app, *, enabled: bool):
if logger is None:
return None
screen_log_handler = next(
(handler for handler in logger.handlers if handler.get_name() == SCREEN_LOG_HANDLER_NAME),
None,
)
if not enabled:
if screen_log_handler is not None:
logger.removeHandler(screen_log_handler)
screen_log_handler.close()
return None
if screen_log_handler is None:
screen_log_handler = ScreenLogHandler(app)
logger.addHandler(screen_log_handler)
elif isinstance(screen_log_handler, ScreenLogHandler):
screen_log_handler.set_app(app)
screen_log_handler.setLevel(logging.DEBUG)
screen_log_handler.setFormatter(
logging.Formatter(
f"%(name)-{SCREEN_LOG_COMPONENT_WIDTH}s "
+ f"%(levelname)-{SCREEN_LOG_LEVEL_WIDTH}s "
+ "%(asctime)s | %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
)
return screen_log_handler
def build_screen_controllers(
context: dict,
*,
@@ -63,3 +269,125 @@ def build_screen_controllers(
controllers['shifted_season'] = ShiftedSeasonController(context=context)
return controllers
def populate_tag_table(
table,
tags: Mapping[str, object],
*,
ignore_keys: list[str],
remove_keys: list[str],
) -> dict[object, tuple[str, str]]:
"""Render display rows while keeping raw tag data addressable by row key."""
table.clear()
row_data: dict[object, tuple[str, str]] = {}
for tag_key, tag_value in tags.items():
raw_key = str(tag_key)
raw_value = str(tag_value)
text_color = None
if raw_key in ignore_keys:
text_color = "blue"
if raw_key in remove_keys:
text_color = "red"
row_key = table.add_row(
str(formatRichColor(raw_key, text_color)),
str(formatRichColor(raw_value, text_color)),
)
row_data[row_key] = (raw_key, raw_value)
return row_data
def localized_column_width(label: str, minimum: int, *, padding: int = 2) -> int:
"""Ensure translated table headers fit within their visible column width."""
text = str(label)
return max(
int(minimum),
len(text) + int(padding),
int(cell_len(text)) + int(padding),
)
def add_auto_table_column(table, label, *, key=None, default=None):
"""Add a DataTable column that sizes itself from header and cell content."""
return table.add_column(label, key=key, default=default)
def update_table_column_label(table, column_key, label) -> None:
"""Update a column label and keep auto-width columns in sync with it."""
column = table.columns.get(column_key)
if column is None:
return
text_label = Text.from_markup(label) if isinstance(label, str) else label
column.label = text_label
if column.auto_width:
measured = measure_renderables(
table.app.console,
table.app.console.options,
[text_label],
).maximum
column.content_width = max(column.content_width, measured)
table.refresh()
def build_screen_log_pane() -> ResizableScreenLogPane | Static:
"""Create a shared collapsible log pane for screen-local diagnostics."""
if not is_screen_log_pane_enabled():
hidden = Static("", id=f"{SCREEN_LOG_PANE_ID}_disabled")
hidden.display = False
return hidden
return ResizableScreenLogPane()
def toggle_screen_log_pane(screen) -> bool:
"""Toggle the current screen log pane when present."""
try:
logPane = screen.query_one(f"#{SCREEN_LOG_PANE_ID}", Collapsible)
except Exception:
return False
logPane.collapsed = not bool(logPane.collapsed)
return True
def write_screen_log(screen, message: str) -> bool:
"""Append a line to the current screen log pane when present."""
if message is None:
return False
text = str(message).strip()
if not text:
return False
try:
logView = screen.query_one(f"#{SCREEN_LOG_VIEW_ID}", RichLog)
except Exception:
return False
logView.write(text)
return True
def go_back_or_exit(screen) -> None:
"""Pop the current screen when possible, otherwise exit the app."""
screen_stack = getattr(screen.app, "screen_stack", ())
if len(screen_stack) > 2:
screen.app.pop_screen()
return
screen.app.exit()
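The `len(screen_stack) > 2` check reads as: Textual typically keeps a default screen at the bottom of the stack, so a depth of 2 means the current screen sits directly on it and there is nothing meaningful to pop back to. A tiny sketch of that decision in isolation:

```python
def back_or_exit_action(stack_depth: int) -> str:
    # Mirrors go_back_or_exit above: depth 2 is treated as the bottom,
    # because the default screen occupies one stack slot.
    return "pop" if stack_depth > 2 else "exit"

print(back_or_exit_action(3))  # nested screen: pop back
print(back_or_exit_action(2))  # first pushed screen: exit the app
```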


@@ -2,11 +2,30 @@ from textual.app import ComposeResult
from textual.screen import Screen
from textual.widgets import Footer, Placeholder
from .i18n import t
from .screen_support import build_screen_log_pane, go_back_or_exit
class SettingsScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
def __init__(self):
super().__init__()
self.context = self.app.getContext()
self.context = self.app.getContext()
def compose(self) -> ComposeResult:
yield Placeholder("Settings Screen")
# Row 1
yield Placeholder(t("Settings Screen"))
yield build_screen_log_pane()
yield Footer()
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
def action_back(self):
go_back_or_exit(self)


@@ -383,10 +383,27 @@ class ShiftedSeasonController:
session.close()
def shiftSeason(self, showId, season, episode, patternId=None):
if season == -1 or episode == -1:
return season, episode
shiftedSeason, shiftedEpisode, sourceLabel = self.resolveShiftSeason(
showId,
season,
episode,
patternId=patternId,
)
if shiftedSeason != season or shiftedEpisode != episode:
self.context['logger'].info(
f"Setting season shift {season}/{episode} -> {shiftedSeason}/{shiftedEpisode} from {sourceLabel}"
)
return shiftedSeason, shiftedEpisode
def resolveShiftSeason(self, showId, season, episode, patternId=None):
if season == -1 or episode == -1:
return season, episode, "unrecognized"
session = None
try:
session = self.Session()
@@ -420,12 +437,7 @@ class ShiftedSeasonController:
if activeShift.getPatternId() is not None
else "show"
)
self.context['logger'].info(
f"Setting season shift {season}/{episode} -> {shiftedSeason}/{shiftedEpisode} from {sourceLabel}"
)
return shiftedSeason, shiftedEpisode
return shiftedSeason, shiftedEpisode, sourceLabel
except ShiftedSeasonOwnerException as ex:
raise click.ClickException(str(ex))
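The refactor above separates resolution (which now also returns a provenance label) from the side effect of logging: `shiftSeason` delegates to `resolveShiftSeason` and only logs when the numbering actually changed. A toy sketch of that pattern (names are illustrative, not the controller's API):

```python
def resolve_shift(season, episode, shifts):
    # shifts: list of (original_season, season_offset, episode_offset, label)
    for original, s_off, e_off, label in shifts:
        if original == season:
            return season + s_off, episode + e_off, label
    return season, episode, "unshifted"

def shift(season, episode, shifts, log):
    if season == -1 or episode == -1:
        return season, episode  # unrecognized numbering passes through
    new_season, new_episode, source = resolve_shift(season, episode, shifts)
    if (new_season, new_episode) != (season, episode):
        log.append(f"Setting season shift {season}/{episode} -> "
                   f"{new_season}/{new_episode} from {source}")
    return new_season, new_episode

log: list[str] = []
print(shift(2, 5, [(2, 1, 0, "pattern #7")], log))  # shifted, one log line
print(shift(1, 3, [(2, 1, 0, "pattern #7")], log))  # unchanged, nothing logged
```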


@@ -4,7 +4,9 @@ from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid
from .i18n import t
from .shifted_season_controller import ShiftedSeasonController
from .screen_support import build_screen_log_pane, go_back_or_exit
from ffx.model.shifted_season import ShiftedSeason
@@ -12,15 +14,22 @@ from ffx.model.shifted_season import ShiftedSeason
# Screen[dict[int, str, int]]
class ShiftedSeasonDeleteScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 2;
grid-rows: 2 auto;
grid-columns: 30 330;
grid-columns: 18 5fr;
height: 100%;
width: 100%;
min-width: 90;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -58,12 +67,15 @@ class ShiftedSeasonDeleteScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)
ownerLabel = (
f"pattern #{self._patternId}"
t("pattern #{id}", id=self._patternId)
if self._patternId is not None
else f"show #{self._showId}"
else t("show #{id}", id=self._showId)
)
self.query_one("#static_owner", Static).update(ownerLabel)
self.query_one("#static_original_season", Static).update(str(shiftedSeason.getOriginalSeason()))
@@ -78,36 +90,47 @@ class ShiftedSeasonDeleteScreen(Screen):
yield Header()
with Grid():
# Row 1
yield Static(t("Are you sure to delete the following shifted season?"), id="toplabel", classes="two")
yield Static("Are you sure to delete the following shifted season?", id="toplabel", classes="two")
# Row 2
yield Static(" ", classes="two")
yield Static("from")
# Row 3
yield Static(t("from"))
yield Static(" ", id="static_owner")
# Row 4
yield Static(" ", classes="two")
yield Static("Source season")
# Row 5
yield Static(t("Source Season"))
yield Static(" ", id="static_original_season")
yield Static("First episode")
# Row 6
yield Static(t("First episode"))
yield Static(" ", id="static_first_episode")
yield Static("Last episode")
# Row 7
yield Static(t("Last episode"))
yield Static(" ", id="static_last_episode")
yield Static("Season offset")
# Row 8
yield Static(t("Season Offset"))
yield Static(" ", id="static_season_offset")
yield Static("Episode offset")
# Row 9
yield Static(t("Episode offset"))
yield Static(" ", id="static_episode_offset")
# Row 10
yield Static(" ", classes="two")
yield Button("Delete", id="delete_button")
yield Button("Cancel", id="cancel_button")
# Row 11
yield Button(t("Delete"), id="delete_button")
yield Button(t("Cancel"), id="cancel_button")
yield build_screen_log_pane()
yield Footer()
@@ -128,3 +151,6 @@ class ShiftedSeasonDeleteScreen(Screen):
if event.button.id == "cancel_button":
self.app.pop_screen()
def action_back(self):
go_back_or_exit(self)


@@ -4,7 +4,9 @@ from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid
from .i18n import t
from .shifted_season_controller import ShiftedSeasonController
from .screen_support import build_screen_log_pane, go_back_or_exit
from ffx.model.shifted_season import ShiftedSeason
@@ -12,15 +14,22 @@ from ffx.model.shifted_season import ShiftedSeason
# Screen[dict[int, str, int]]
class ShiftedSeasonDetailsScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 3 10;
grid-rows: 2 2 2 2 2 2 2 2 2 2;
grid-columns: 40 40 40;
grid-columns: 20 1fr 1fr;
height: 100%;
width: 100%;
min-width: 80;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -100,6 +109,9 @@ class ShiftedSeasonDetailsScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
if self.__shiftedSeasonId is not None:
shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)
@@ -125,43 +137,48 @@ class ShiftedSeasonDetailsScreen(Screen):
with Grid():
# 1
yield Static("Edit shifted season" if self.__shiftedSeasonId is not None else "New shifted season", id="toplabel", classes="three")
# Row 1
yield Static(
t("Edit shifted season") if self.__shiftedSeasonId is not None else t("New shifted season"),
id="toplabel",
classes="three",
)
# 2
# Row 2
yield Static(" ", classes="three")
# 3
yield Static("Source season")
# Row 3
yield Static(t("Source Season"))
yield Input(id="input_original_season", classes="two")
# 4
yield Static("First Episode")
# Row 4
yield Static(t("First Episode"))
yield Input(id="input_first_episode", classes="two")
# 5
yield Static("Last Episode")
# Row 5
yield Static(t("Last Episode"))
yield Input(id="input_last_episode", classes="two")
# 6
yield Static("Season offset")
# Row 6
yield Static(t("Season Offset"))
yield Input(id="input_season_offset", classes="two")
# 7
yield Static("Episode offset")
# Row 7
yield Static(t("Episode offset"))
yield Input(id="input_episode_offset", classes="two")
# 8
# Row 8
yield Static(" ", classes="three")
# 9
yield Button("Save", id="save_button")
yield Button("Cancel", id="cancel_button")
# Row 9
yield Button(t("Save"), id="save_button")
yield Button(t("Cancel"), id="cancel_button")
yield Static(" ")
# 10
# Row 10
yield Static(" ", classes="three")
yield build_screen_log_pane()
yield Footer()
@@ -196,6 +213,9 @@ class ShiftedSeasonDetailsScreen(Screen):
return shiftedSeasonObj
def action_back(self):
go_back_or_exit(self)
# Event handler for button press
def on_button_pressed(self, event: Button.Pressed) -> None:


@@ -2,20 +2,29 @@ from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid
from .i18n import t
from .show_controller import ShowController
from .screen_support import build_screen_log_pane, go_back_or_exit
# Screen[dict[int, str, int]]
class ShowDeleteScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 2;
grid-rows: 2 auto;
grid-columns: 30 auto;
grid-columns: 18 4fr;
height: 100%;
width: 100%;
min-width: 80;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -59,22 +68,28 @@ class ShowDeleteScreen(Screen):
yield Header()
with Grid():
# Row 1
yield Static(t("Are you sure to delete the following show?"), id="toplabel", classes="two")
yield Static("Are you sure to delete the following show?", id="toplabel", classes="two")
# Row 2
yield Static("", classes="two")
# Row 3
yield Static("", id="showlabel")
yield Static("")
# Row 4
yield Static("", classes="two")
# Row 5
yield Static("", classes="two")
yield Button("Delete", id="delete_button")
yield Button("Cancel", id="cancel_button")
# Row 6
yield Button(t("Delete"), id="delete_button")
yield Button(t("Cancel"), id="cancel_button")
yield build_screen_log_pane()
yield Footer()
@@ -93,3 +108,13 @@ class ShowDeleteScreen(Screen):
if event.button.id == "cancel_button":
self.app.pop_screen()
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
def action_back(self):
go_back_or_exit(self)


@@ -16,7 +16,14 @@ from .shifted_season_delete_screen import ShiftedSeasonDeleteScreen
from ffx.model.shifted_season import ShiftedSeason
from .helper import filterFilename
from .screen_support import build_screen_bootstrap, build_screen_controllers
from .i18n import t
from .screen_support import (
add_auto_table_column,
build_screen_bootstrap,
build_screen_controllers,
build_screen_log_pane,
go_back_or_exit,
)
# Screen[dict[int, str, int]]
@@ -25,12 +32,15 @@ class ShowDetailsScreen(Screen):
CSS = """
Grid {
grid-size: 5 18;
grid-rows: 2 2 2 2 2 2 6 2 2 2 2 2 2 9 2 9 2 2;
grid-columns: 30 30 30 30 30;
grid-size: 5 19;
grid-rows: 2 2 2 2 2 2 6 2 2 2 2 2 2 2 9 2 9 2 2;
grid-columns: 25 20 20 20 1fr;
height: 100%;
width: 100%;
min-width: 110;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -43,6 +53,7 @@ class ShowDetailsScreen(Screen):
DataTable {
column-span: 2;
min-height: 8;
width: 100%;
}
DataTable .datatable--cursor {
@@ -84,9 +95,10 @@ class ShowDetailsScreen(Screen):
"""
BINDINGS = [
("a", "add_pattern", "Add Pattern"),
("e", "edit_pattern", "Edit Pattern"),
("r", "remove_pattern", "Remove Pattern"),
("escape", "back", t("Back")),
("a", "add_pattern", t("Add Pattern")),
("e", "edit_pattern", t("Edit Pattern")),
("r", "remove_pattern", t("Remove Pattern")),
]
def __init__(self, showId = None):
@@ -108,12 +120,45 @@ class ShowDetailsScreen(Screen):
self.__ssc = controllers['shifted_season']
self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None
self.__patternRowData: dict[object, dict[str, object]] = {}
self.__shiftedSeasonRowData: dict[object, dict[str, int | None]] = {}
def _add_pattern_row(self, *, pattern_id: int | None, pattern_text: str):
row_key = self.patternTable.add_row(str(pattern_text))
self.__patternRowData[row_key] = {
'id': pattern_id,
'show_id': self.__showDescriptor.getId() if self.__showDescriptor is not None else None,
'pattern': str(pattern_text),
}
return row_key
def _add_shifted_season_row(self, shifted_season_obj: dict[str, int | None]):
firstEpisode = shifted_season_obj['first_episode']
firstEpisodeStr = str(firstEpisode) if firstEpisode != -1 else ''
lastEpisode = shifted_season_obj['last_episode']
lastEpisodeStr = str(lastEpisode) if lastEpisode != -1 else ''
row = (
shifted_season_obj['original_season'],
firstEpisodeStr,
lastEpisodeStr,
shifted_season_obj['season_offset'],
shifted_season_obj['episode_offset'],
)
row_key = self.shiftedSeasonsTable.add_row(*map(str, row))
self.__shiftedSeasonRowData[row_key] = dict(shifted_season_obj)
return row_key
def updateShiftedSeasons(self):
self.shiftedSeasonsTable.clear()
self.__shiftedSeasonRowData = {}
if not self.__showDescriptor is None:
@@ -123,25 +168,16 @@ class ShowDetailsScreen(Screen):
for shiftedSeason in self.__ssc.getShiftedSeasonSiblings(showId=showId):
shiftedSeasonObj = shiftedSeason.getObj()
firstEpisode = shiftedSeasonObj['first_episode']
firstEpisodeStr = str(firstEpisode) if firstEpisode != -1 else ''
lastEpisode = shiftedSeasonObj['last_episode']
lastEpisodeStr = str(lastEpisode) if lastEpisode != -1 else ''
row = (shiftedSeasonObj['original_season'],
firstEpisodeStr,
lastEpisodeStr,
shiftedSeasonObj['season_offset'],
shiftedSeasonObj['episode_offset'])
self.shiftedSeasonsTable.add_row(*map(str, row))
shiftedSeasonObj['id'] = shiftedSeason.getId()
self._add_shifted_season_row(shiftedSeasonObj)
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
if self.__showDescriptor is not None:
showId = int(self.__showDescriptor.getId())
@@ -162,8 +198,10 @@ class ShowDetailsScreen(Screen):
#raise click.ClickException(f"show_id {showId}")
for pattern in self.__pc.getPatternsForShow(showId):
row = (pattern.getPattern(),)
self.patternTable.add_row(*map(str, row))
self._add_pattern_row(
pattern_id=pattern.getId(),
pattern_text=pattern.getPattern(),
)
self.updateShiftedSeasons()
@@ -195,10 +233,7 @@ class ShowDetailsScreen(Screen):
row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
if row_key is not None:
selected_row_data = self.patternTable.get_row(row_key)
selectedPattern['show_id'] = self.__showDescriptor.getId()
selectedPattern['pattern'] = str(selected_row_data[0])
selectedPattern = dict(self.__patternRowData.get(row_key, {}))
except CellDoesNotExist:
pass
@@ -217,31 +252,7 @@ class ShowDetailsScreen(Screen):
row_key, col_key = self.shiftedSeasonsTable.coordinate_to_cell_key(self.shiftedSeasonsTable.cursor_coordinate)
if row_key is not None:
selected_row_data = self.shiftedSeasonsTable.get_row(row_key)
def parse_int_or_default(value: str, default: int) -> int:
try:
return int(value)
except (TypeError, ValueError):
return default
shiftedSeasonObj['original_season'] = int(selected_row_data[0])
shiftedSeasonObj['first_episode'] = parse_int_or_default(selected_row_data[1], -1)
shiftedSeasonObj['last_episode'] = parse_int_or_default(selected_row_data[2], -1)
shiftedSeasonObj['season_offset'] = parse_int_or_default(selected_row_data[3], 0)
shiftedSeasonObj['episode_offset'] = parse_int_or_default(selected_row_data[4], 0)
if self.__showDescriptor is not None:
showId = int(self.__showDescriptor.getId())
shiftedSeasonId = self.__ssc.findShiftedSeason(showId,
originalSeason=shiftedSeasonObj['original_season'],
firstEpisode=shiftedSeasonObj['first_episode'],
lastEpisode=shiftedSeasonObj['last_episode'])
if shiftedSeasonId is not None:
shiftedSeasonObj['id'] = shiftedSeasonId
shiftedSeasonObj = dict(self.__shiftedSeasonRowData.get(row_key, {}))
except CellDoesNotExist:
pass
@@ -255,9 +266,14 @@ class ShowDetailsScreen(Screen):
def handle_add_pattern(self, screenResult):
if screenResult is None:
return
pattern = (screenResult['pattern'],)
self.patternTable.add_row(*map(str, pattern))
pattern_id = self.__pc.findPattern(screenResult)
self._add_pattern_row(
pattern_id=pattern_id,
pattern_text=screenResult['pattern'],
)
def action_edit_pattern(self):
@@ -265,8 +281,7 @@ class ShowDetailsScreen(Screen):
selectedPatternDescriptor = self.getSelectedPatternDescriptor()
if selectedPatternDescriptor:
selectedPatternId = self.__pc.findPattern(selectedPatternDescriptor)
selectedPatternId = selectedPatternDescriptor.get('id')
if selectedPatternId is None:
raise click.ClickException("ShowDetailsScreen.action_edit_pattern(): Pattern to edit has no id")
@@ -280,6 +295,8 @@ class ShowDetailsScreen(Screen):
row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
self.patternTable.update_cell(row_key, self.column_key_pattern, screenResult['pattern'])
if row_key in self.__patternRowData:
self.__patternRowData[row_key]['pattern'] = str(screenResult['pattern'])
except CellDoesNotExist:
pass
@@ -291,7 +308,7 @@ class ShowDetailsScreen(Screen):
if selectedPatternDescriptor:
selectedPatternId = self.__pc.findPattern(selectedPatternDescriptor)
selectedPatternId = selectedPatternDescriptor.get('id')
if selectedPatternId is None:
raise click.ClickException("ShowDetailsScreen.action_remove_pattern(): Pattern to remove has no id")
@@ -304,6 +321,7 @@ class ShowDetailsScreen(Screen):
try:
row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
self.patternTable.remove_row(row_key)
self.__patternRowData.pop(row_key, None)
except CellDoesNotExist:
pass
@@ -315,18 +333,18 @@ class ShowDetailsScreen(Screen):
self.patternTable = DataTable(classes="five")
# Define the columns with headers
self.column_key_pattern = self.patternTable.add_column("Pattern", width=150)
self.column_key_pattern = add_auto_table_column(self.patternTable, t("Pattern"))
self.patternTable.cursor_type = 'row'
self.shiftedSeasonsTable = DataTable(classes="five")
self.column_key_original_season = self.shiftedSeasonsTable.add_column("Source Season", width=30)
self.column_key_first_episode = self.shiftedSeasonsTable.add_column("First Episode", width=30)
self.column_key_last_episode = self.shiftedSeasonsTable.add_column("Last Episode", width=30)
self.column_key_season_offset = self.shiftedSeasonsTable.add_column("Season Offset", width=30)
self.column_key_episode_offset = self.shiftedSeasonsTable.add_column("Episode Offset", width=30)
self.column_key_original_season = add_auto_table_column(self.shiftedSeasonsTable, t("Source Season"))
self.column_key_first_episode = add_auto_table_column(self.shiftedSeasonsTable, t("First Episode"))
self.column_key_last_episode = add_auto_table_column(self.shiftedSeasonsTable, t("Last Episode"))
self.column_key_season_offset = add_auto_table_column(self.shiftedSeasonsTable, t("Season Offset"))
self.column_key_episode_offset = add_auto_table_column(self.shiftedSeasonsTable, t("Episode Offset"))
self.shiftedSeasonsTable.cursor_type = 'row'
@@ -335,84 +353,91 @@ class ShowDetailsScreen(Screen):
with Grid():
# 1
yield Static("Show" if not self.__showDescriptor is None else "New Show", id="toplabel")
yield Button("Identify", id="identify_button")
# Row 1
yield Static(t("Show") if not self.__showDescriptor is None else t("New Show"), id="toplabel")
yield Button(t("Identify"), id="identify_button")
yield Static(" ", classes="three")
# 2
yield Static("ID")
# Row 2
yield Static(t("ID"))
if not self.__showDescriptor is None:
yield Static("", id="id_static", classes="four")
else:
yield Input(type="integer", id="id_input", classes="four")
# 3
yield Static("Name")
# Row 3
yield Static(t("Name"))
yield Input(type="text", id="name_input", classes="four")
# 4
yield Static("Year")
# Row 4
yield Static(t("Year"))
yield Input(type="integer", id="year_input", classes="four")
#5
yield Static("Quality")
# Row 5
yield Static(t("Quality"))
yield Input(type="integer", id="quality_input", classes="four")
#6
yield Static("Notes")
# Row 6
yield Static(t("Notes"))
yield Static(" ", classes="four")
#7
# Row 7
yield TextArea(id="notes_textarea", classes="five note_box")
#8
yield Static("Index Season Digits")
yield Input(type="integer", id="index_season_digits_input", classes="four")
#9
yield Static("Index Episode Digits")
yield Input(type="integer", id="index_episode_digits_input", classes="four")
#10
yield Static("Indicator Season Digits")
yield Input(type="integer", id="indicator_season_digits_input", classes="four")
#11
yield Static("Indicator Edisode Digits")
yield Input(type="integer", id="indicator_episode_digits_input", classes="four")
# 12
# Row 8
yield Static(" ", classes="five")
# 13
yield Static("Shifted seasons", classes="two")
# Row 9
yield Static(t("Index Season Digits"))
yield Input(type="integer", id="index_season_digits_input", classes="four")
# Row 10
yield Static(t("Index Episode Digits"))
yield Input(type="integer", id="index_episode_digits_input", classes="four")
# Row 11
yield Static(t("Indicator Season Digits"))
yield Input(type="integer", id="indicator_season_digits_input", classes="four")
# Row 12
yield Static(t("Indicator Episode Digits"))
yield Input(type="integer", id="indicator_episode_digits_input", classes="four")
# Row 13
yield Static(" ", classes="five")
# Row 14
yield Static(t("Numbering Mapping"))
if self.__showDescriptor is not None:
yield Button("Add", id="button_add_shifted_season")
yield Button("Edit", id="button_edit_shifted_season")
yield Button("Delete", id="button_delete_shifted_season")
yield Button(t("Add"), id="button_add_shifted_season")
yield Button(t("Edit"), id="button_edit_shifted_season")
yield Button(t("Delete"), id="button_delete_shifted_season")
else:
yield Static(" ")
yield Static(" ")
yield Static(" ")
# 14
yield Static(" ")
# Row 15
yield self.shiftedSeasonsTable
# 15
yield Static("File patterns", classes="five")
# 16
# Row 16
yield Static(t("File patterns"), classes="five")
# Row 17
yield self.patternTable
# 17
# Row 18
yield Static(" ", classes="five")
# 18
yield Button("Save", id="save_button")
yield Button("Cancel", id="cancel_button")
# Row 19
yield Button(t("Save"), id="save_button")
yield Button(t("Cancel"), id="cancel_button")
yield build_screen_log_pane()
yield Footer()
@@ -511,3 +536,6 @@ class ShowDetailsScreen(Screen):
def handle_delete_shifted_season(self, screenResult):
self.updateShiftedSeasons()
def action_back(self):
go_back_or_exit(self)


@@ -1,8 +1,16 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, DataTable
from textual.containers import Grid
from rich.text import Text
from .i18n import t
from .show_controller import ShowController
from .screen_support import (
add_auto_table_column,
build_screen_log_pane,
go_back_or_exit,
update_table_column_label,
)
from .show_details_screen import ShowDetailsScreen
from .show_delete_screen import ShowDeleteScreen
@@ -21,7 +29,10 @@ class ShowsScreen(Screen):
grid-rows: 2 auto;
height: 100%;
width: 100%;
min-width: 80;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
DataTable .datatable--cursor {
@@ -49,12 +60,17 @@ class ShowsScreen(Screen):
height: 100%;
border: solid green;
}
DataTable {
width: 100%;
}
"""
BINDINGS = [
("e", "edit_show", "Edit Show"),
("n", "new_show", "New Show"),
("d", "delete_show", "Delete Show"),
("escape", "back", t("Back")),
("e", "edit_show", t("Edit Show")),
("n", "new_show", t("New Show")),
("d", "delete_show", t("Delete Show")),
]
@@ -66,6 +82,78 @@ class ShowsScreen(Screen):
self.Session = self.context['database']['session'] # convenience
self.__sc = ShowController(context = self.context)
self.__showRowData: dict[object, ShowDescriptor] = {}
self.__sortColumnKey = None
self.__sortReverse = False
self.__columnLabels: dict[object, str] = {}
def _add_show_row(self, show_descriptor: ShowDescriptor):
row_key = self.table.add_row(
str(show_descriptor.getId()),
str(show_descriptor.getName()),
str(show_descriptor.getYear()),
)
self.__showRowData[row_key] = show_descriptor
return row_key
def _get_selected_row_key(self):
try:
row_key, _ = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)
return row_key
except CellDoesNotExist:
return None
def _move_cursor_to_row_key(self, row_key):
if row_key is None:
return
try:
row_index = int(self.table.get_row_index(row_key))
except Exception:
return
self.table.move_cursor(row=row_index)
def _sort_key_for_column(self, column_key):
if column_key == self.column_key_id:
return lambda value: int(value)
if column_key == self.column_key_year:
return lambda value: int(value)
if column_key == self.column_key_name:
return lambda value: str(value).casefold()
return None
def _update_header_labels(self):
if not hasattr(self, "table"):
return
arrow_up = "↑"
arrow_down = "↓"
for column_key, base_label in self.__columnLabels.items():
column = self.table.columns.get(column_key)
if column is None:
continue
label_text = base_label
if column_key == self.__sortColumnKey:
label_text = f"{base_label} {arrow_down if self.__sortReverse else arrow_up}"
update_table_column_label(self.table, column_key, Text(label_text))
def _apply_sort(self, *, preserve_row_key=None):
if self.__sortColumnKey is None:
self._update_header_labels()
return
self.table.sort(
self.__sortColumnKey,
key=self._sort_key_for_column(self.__sortColumnKey),
reverse=self.__sortReverse,
)
self._move_cursor_to_row_key(preserve_row_key)
self._update_header_labels()
def getSelectedShowId(self):
@@ -76,13 +164,29 @@ class ShowsScreen(Screen):
row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)
if row_key is not None:
selected_row_data = self.table.get_row(row_key)
return selected_row_data[0]
selected_show = self.__showRowData.get(row_key)
return selected_show.getId() if selected_show is not None else None
except CellDoesNotExist:
return None
def action_back(self):
go_back_or_exit(self)
def on_data_table_header_selected(self, event: DataTable.HeaderSelected) -> None:
if event.data_table is not self.table:
return
selected_row_key = self._get_selected_row_key()
if self.__sortColumnKey == event.column_key:
self.__sortReverse = not self.__sortReverse
else:
self.__sortColumnKey = event.column_key
self.__sortReverse = False
self._apply_sort(preserve_row_key=selected_row_key)
@@ -90,9 +194,9 @@ class ShowsScreen(Screen):
self.app.push_screen(ShowDetailsScreen(), self.handle_new_screen)
def handle_new_screen(self, screenResult):
show = (screenResult['id'], screenResult['name'], screenResult['year'])
self.table.add_row(*map(str, show))
if isinstance(screenResult, ShowDescriptor):
row_key = self._add_show_row(screenResult)
self._apply_sort(preserve_row_key=row_key)
def action_edit_show(self):
@@ -111,6 +215,8 @@ class ShowsScreen(Screen):
self.table.update_cell(row_key, self.column_key_name, showDescriptor.getName())
self.table.update_cell(row_key, self.column_key_year, showDescriptor.getYear())
self.__showRowData[row_key] = showDescriptor
self._apply_sort(preserve_row_key=row_key)
except CellDoesNotExist:
pass
@@ -131,15 +237,22 @@ class ShowsScreen(Screen):
try:
row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)
self.table.remove_row(row_key)
self.__showRowData.pop(row_key, None)
except CellDoesNotExist:
pass
def on_mount(self) -> None:
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
for show in self.__sc.getAllShows():
row = (int(show.id), show.name, show.year) # Convert each element to a string before adding
self.table.add_row(*map(str, row))
self._add_show_row(show.getDescriptor(self.context))
self.__sortColumnKey = self.column_key_name
self._apply_sort()
def compose(self):
@@ -148,21 +261,31 @@ class ShowsScreen(Screen):
self.table = DataTable()
# Define the columns with headers
self.column_key_id = self.table.add_column("ID", width=10)
self.column_key_name = self.table.add_column("Name", width=50)
self.column_key_year = self.table.add_column("Year", width=10)
idLabel = t("ID")
nameLabel = t("Name")
yearLabel = t("Year")
self.column_key_id = add_auto_table_column(self.table, idLabel)
self.column_key_name = add_auto_table_column(self.table, nameLabel)
self.column_key_year = add_auto_table_column(self.table, yearLabel)
self.__columnLabels = {
self.column_key_id: idLabel,
self.column_key_name: nameLabel,
self.column_key_year: yearLabel,
}
self.table.cursor_type = 'row'
yield Header()
with Grid():
# Row 1
yield Static(t("Shows"), markup=False)
yield Static("Shows")
# Row 2
yield self.table
f = Footer()
f.description = "yolo"
yield build_screen_log_pane()
yield f
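The header-click sorting added above combines three pieces: a per-column key function, a direction toggle on repeated clicks, and cursor preservation by row key. The key-selection and toggle logic in isolation (a sketch, not the screen's exact code):

```python
def sort_key_for_column(column: str):
    # Numeric columns sort as integers, names case-insensitively.
    if column in ("id", "year"):
        return int
    return lambda value: str(value).casefold()

def toggle_sort(rows, column, index, state):
    # state mimics the screen's __sortColumnKey / __sortReverse fields.
    if state.get("column") == column:
        state["reverse"] = not state["reverse"]
    else:
        state["column"], state["reverse"] = column, False
    key = sort_key_for_column(column)
    rows.sort(key=lambda row: key(row[index]), reverse=state["reverse"])

rows = [("10", "beta", "1999"), ("2", "Alpha", "2005")]
state = {}
toggle_sort(rows, "id", 0, state)   # numeric sort: "2" before "10"
toggle_sort(rows, "id", 0, state)   # same column again: direction flips
```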


@@ -2,19 +2,29 @@ from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid
from .i18n import t
from .screen_support import build_screen_log_pane, go_back_or_exit
# Screen[dict[int, str, int]]
class TagDeleteScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 4 9;
grid-rows: 2 2 2 2 2 2 2 2 2;
grid-columns: 30 30 30 30;
grid-columns: 18 1fr 1fr 1fr;
height: 100%;
width: 100%;
min-width: 90;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -54,6 +64,9 @@ class TagDeleteScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
self.query_one("#keylabel", Static).update(str(self.__key))
self.query_one("#valuelabel", Static).update(str(self.__value))
@@ -64,24 +77,25 @@ class TagDeleteScreen(Screen):
with Grid():
#1
yield Static(f"Are you sure to delete this tag ?", id="toplabel", classes="five")
# Row 1
yield Static(t("Are you sure to delete this tag?"), id="toplabel", classes="five")
#2
yield Static("Key")
# Row 2
yield Static(t("Key"))
yield Static(" ", id="keylabel", classes="four")
#3
yield Static("Value")
# Row 3
yield Static(t("Value"))
yield Static(" ", id="valuelabel", classes="four")
#4
# Row 4
yield Static(" ", classes="five")
#9
yield Button("Delete", id="delete_button")
yield Button("Cancel", id="cancel_button")
# Row 5
yield Button(t("Delete"), id="delete_button")
yield Button(t("Cancel"), id="cancel_button")
yield build_screen_log_pane()
yield Footer()
@@ -96,3 +110,5 @@ class TagDeleteScreen(Screen):
if event.button.id == "cancel_button":
self.app.pop_screen()
def action_back(self):
go_back_or_exit(self)


@@ -2,19 +2,29 @@ from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid
from .i18n import t
from .screen_support import build_screen_log_pane, go_back_or_exit
# Screen[dict[int, str, int]]
class TagDetailsScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 5 20;
grid-rows: 2 2 2 2 2 3 2 2 2 2 2 6 2 2 6 2 2 2 2 6;
grid-columns: 25 25 25 25 225;
grid-columns: 18 1fr 1fr 1fr 5fr;
height: 100%;
width: 100%;
min-width: 100;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -77,6 +87,9 @@ class TagDetailsScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
if self.__key is not None:
self.query_one("#key_input", Input).value = str(self.__key)
@@ -90,26 +103,28 @@ class TagDetailsScreen(Screen):
with Grid():
# 8
yield Static("Key")
# Row 1
yield Static(t("Key"))
yield Input(id="key_input", classes="four")
yield Static("Value")
# Row 2
yield Static(t("Value"))
yield Input(id="value_input", classes="four")
# 17
# Row 3
yield Static(" ", classes="five")
# 18
yield Button("Save", id="save_button")
yield Button("Cancel", id="cancel_button")
# Row 4
yield Button(t("Save"), id="save_button")
yield Button(t("Cancel"), id="cancel_button")
# 19
# Row 5
yield Static(" ", classes="five")
# 20
# Row 6
yield Static(" ", classes="five", id="messagestatic")
yield build_screen_log_pane()
yield Footer(id="footer")
@@ -120,6 +135,9 @@ class TagDetailsScreen(Screen):
return (tagKey, tagValue)
def action_back(self):
go_back_or_exit(self)
# Event handler for button press
def on_button_pressed(self, event: Button.Pressed) -> None:


@@ -3,20 +3,22 @@ from enum import Enum
class TrackCodec(Enum):
H265 = {'identifier': 'hevc', 'format': 'h265', 'extension': 'h265' ,'label': 'H.265'}
VP9 = {'identifier': 'vp9', 'format': 'ivf', 'extension': 'ivf' , 'label': 'VP9'}
H265 = {'identifier': 'hevc', 'format': None, 'extension': 'h265' ,'label': 'H.265'}
H264 = {'identifier': 'h264', 'format': 'h264', 'extension': 'h264' ,'label': 'H.264'}
MPEG4 = {'identifier': 'mpeg4', 'format': 'm4v', 'extension': 'm4v' ,'label': 'MPEG-4'}
MPEG2 = {'identifier': 'mpeg2video', 'format': 'mpeg2video', 'extension': 'mpg' ,'label': 'MPEG-2'}
OPUS = {'identifier': 'opus', 'format': 'opus', 'extension': 'opus' , 'label': 'Opus'}
AAC = {'identifier': 'aac', 'format': None, 'extension': 'aac' , 'label': 'AAC'}
AC3 = {'identifier': 'ac3', 'format': 'ac3', 'extension': 'ac3' , 'label': 'AC3'}
EAC3 = {'identifier': 'eac3', 'format': 'eac3', 'extension': 'eac3' , 'label': 'EAC3'}
DTS = {'identifier': 'dts', 'format': 'dts', 'extension': 'dts' , 'label': 'DTS'}
MP3 = {'identifier': 'mp3', 'format': 'mp3', 'extension': 'mp3' , 'label': 'MP3'}
WEBVTT = {'identifier': 'webvtt', 'format': 'webvtt', 'extension': 'vtt' , 'label': 'WebVTT'}
SRT = {'identifier': 'subrip', 'format': 'srt', 'extension': 'srt' , 'label': 'SRT'}
ASS = {'identifier': 'ass', 'format': 'ass', 'extension': 'ass' , 'label': 'ASS'}
TTF = {'identifier': 'ttf', 'format': None, 'extension': 'ttf' , 'label': 'TTF'}
PGS = {'identifier': 'hdmv_pgs_subtitle', 'format': 'sup', 'extension': 'sup' , 'label': 'PGS'}
VOBSUB = {'identifier': 'dvd_subtitle', 'format': None, 'extension': 'mkv' , 'label': 'VobSub'}
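The enum above maps ffprobe codec identifiers to unmux formats, extensions, and display labels. As a minimal standalone sketch of the lookup pattern used elsewhere in this diff (`TrackCodec.identify`), with only a small, simplified subset of entries reproduced here:

```python
from enum import Enum

# Simplified sketch of the identifier lookup; not the project's full enum.
class TrackCodec(Enum):
    H264 = {'identifier': 'h264', 'format': 'h264', 'extension': 'h264', 'label': 'H.264'}
    AAC = {'identifier': 'aac', 'format': None, 'extension': 'aac', 'label': 'AAC'}
    UNKNOWN = {'identifier': None, 'format': None, 'extension': None, 'label': 'Unknown'}

    def identifier(self):
        return self.value['identifier']

    @classmethod
    def identify(cls, codec_name):
        # ffprobe reports codec_name strings such as 'h264' or 'aac';
        # fall back to UNKNOWN when nothing matches.
        for member in cls:
            if member.value['identifier'] == codec_name:
                return member
        return cls.UNKNOWN
```

A `format` of `None` marks codecs that cannot be written to a raw elementary-stream container as-is and need special handling during unmux.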

View File

@@ -43,7 +43,7 @@ class TrackController():
s = self.Session()
track = Track(pattern_id = patId,
track_type = int(trackDescriptor.getType().index()),
codec_name = str(trackDescriptor.getCodec().identifier()),
codec_name = str(trackDescriptor.getFormatDescriptor().identifier()),
index = int(trackDescriptor.getIndex()),
source_index = int(trackDescriptor.getSourceIndex()),
disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet())),
@@ -82,7 +82,7 @@ class TrackController():
track.index = int(trackDescriptor.getIndex())
track.track_type = int(trackDescriptor.getType().index())
track.codec_name = str(trackDescriptor.getCodec().identifier())
track.codec_name = str(trackDescriptor.getFormatDescriptor().identifier())
track.audio_layout = int(trackDescriptor.getAudioLayout().index())
track.disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet()))

View File

@@ -5,20 +5,29 @@ from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid
from ffx.track_descriptor import TrackDescriptor
from .i18n import t
from .screen_support import build_screen_log_pane, go_back_or_exit
# Screen[dict[int, str, int]]
class TrackDeleteScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 4 9;
grid-rows: 2 2 2 2 2 2 2 2 2;
grid-columns: 30 30 30 30;
grid-columns: 18 1fr 1fr 1fr;
height: 100%;
width: 100%;
min-width: 90;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -58,6 +67,9 @@ class TrackDeleteScreen(Screen):
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
self.query_one("#subindexlabel", Static).update(str(self.__trackDescriptor.getSubIndex()))
self.query_one("#patternlabel", Static).update(str(self.__trackDescriptor.getPatternId()))
self.query_one("#languagelabel", Static).update(str(self.__trackDescriptor.getLanguage().label()))
@@ -70,38 +82,46 @@ class TrackDeleteScreen(Screen):
with Grid():
#1
yield Static(f"Are you sure you want to delete the following {self.__trackDescriptor.getType().label()} track?", id="toplabel", classes="four")
# Row 1
yield Static(
t(
"Are you sure you want to delete the following {track_type} track?",
track_type=t(self.__trackDescriptor.getType().label()),
),
id="toplabel",
classes="four",
)
#2
yield Static("sub index")
# Row 2
yield Static(t("sub index"))
yield Static(" ", id="subindexlabel", classes="three")
#3
yield Static("from pattern")
# Row 3
yield Static(t("from pattern"))
yield Static(" ", id="patternlabel", classes="three")
#4
# Row 4
yield Static(" ", classes="four")
#5
yield Static("Language")
# Row 5
yield Static(t("Language"))
yield Static(" ", id="languagelabel", classes="three")
#6
yield Static("Title")
# Row 6
yield Static(t("Title"))
yield Static(" ", id="titlelabel", classes="three")
#7
# Row 7
yield Static(" ", classes="four")
#8
# Row 8
yield Static(" ", classes="four")
#9
yield Button("Delete", id="delete_button")
yield Button("Cancel", id="cancel_button")
# Row 9
yield Button(t("Delete"), id="delete_button")
yield Button(t("Cancel"), id="cancel_button")
yield build_screen_log_pane()
yield Footer()
@@ -113,3 +133,6 @@ class TrackDeleteScreen(Screen):
if event.button.id == "cancel_button":
self.app.pop_screen()
def action_back(self):
go_back_or_exit(self)

View File

@@ -1,5 +1,6 @@
from typing import Self
from .attachment_format import AttachmentFormat
from .iso_language import IsoLanguage
from .track_type import TrackType
from .audio_layout import AudioLayout
@@ -26,6 +27,7 @@ class TrackDescriptor:
TRACK_TYPE_KEY = "track_type"
CODEC_KEY = "codec_name"
ATTACHMENT_FORMAT_KEY = "attachment_format"
AUDIO_LAYOUT_KEY = "audio_layout"
FFPROBE_INDEX_KEY = "index"
@@ -110,15 +112,6 @@ class TrackDescriptor:
else:
self.__trackType = TrackType.UNKNOWN
if TrackDescriptor.CODEC_KEY in kwargs.keys():
if type(kwargs[TrackDescriptor.CODEC_KEY]) is not TrackCodec:
raise TypeError(
f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec"
)
self.__trackCodec = kwargs[TrackDescriptor.CODEC_KEY]
else:
self.__trackCodec = TrackCodec.UNKNOWN
if TrackDescriptor.TAGS_KEY in kwargs.keys():
if type(kwargs[TrackDescriptor.TAGS_KEY]) is not dict:
raise TypeError(
@@ -151,6 +144,34 @@ class TrackDescriptor:
else:
self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED
self.__trackCodec = TrackCodec.UNKNOWN
self.__attachmentFormat = AttachmentFormat.UNKNOWN
if self.__trackType == TrackType.ATTACHMENT:
if TrackDescriptor.ATTACHMENT_FORMAT_KEY in kwargs.keys():
if type(kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY]) is not AttachmentFormat:
raise TypeError(
f"TrackDescriptor.__init__(): Argument {TrackDescriptor.ATTACHMENT_FORMAT_KEY} is required to be of type AttachmentFormat"
)
self.__attachmentFormat = kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY]
elif TrackDescriptor.CODEC_KEY in kwargs.keys():
legacyCodec = kwargs[TrackDescriptor.CODEC_KEY]
if type(legacyCodec) is AttachmentFormat:
self.__attachmentFormat = legacyCodec
elif type(legacyCodec) is TrackCodec:
self.__attachmentFormat = AttachmentFormat.fromTrackCodec(legacyCodec)
else:
raise TypeError(
f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec for legacy attachment compatibility"
)
else:
if TrackDescriptor.CODEC_KEY in kwargs.keys():
if type(kwargs[TrackDescriptor.CODEC_KEY]) is not TrackCodec:
raise TypeError(
f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec"
)
self.__trackCodec = kwargs[TrackDescriptor.CODEC_KEY]
@classmethod
def fromFfprobe(cls, streamObj, subIndex: int = -1):
"""Processes ffprobe stream data as an array with elements according to the following example
@@ -215,7 +236,12 @@ class TrackDescriptor:
kwargs[TrackDescriptor.TRACK_TYPE_KEY] = trackType
kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.identify(streamObj[TrackDescriptor.FFPROBE_CODEC_KEY])
if trackType == TrackType.ATTACHMENT:
kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = AttachmentFormat.identifyFfprobeStream(streamObj)
else:
kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.identify(
streamObj.get(TrackDescriptor.FFPROBE_CODEC_KEY)
)
kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = (
{
@@ -277,6 +303,14 @@ class TrackDescriptor:
def getCodec(self) -> TrackCodec:
return self.__trackCodec
def getAttachmentFormat(self) -> AttachmentFormat:
return self.__attachmentFormat
def getFormatDescriptor(self):
if self.__trackType == TrackType.ATTACHMENT:
return self.__attachmentFormat
return self.__trackCodec
def getLanguage(self):
if "language" in self.__trackTags.keys():
return IsoLanguage.findThreeLetter(self.__trackTags["language"])
@@ -343,3 +377,29 @@ class TrackDescriptor:
def getExternalSourceFilePath(self):
return self.__externalSourceFilePath
def clone(self, context: dict | None = None):
kwargs = {
TrackDescriptor.ID_KEY: int(self.__trackId),
TrackDescriptor.PATTERN_ID_KEY: int(self.__patternId),
TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY: str(self.__externalSourceFilePath),
TrackDescriptor.INDEX_KEY: int(self.__index),
TrackDescriptor.SOURCE_INDEX_KEY: int(self.__sourceIndex),
TrackDescriptor.SUB_INDEX_KEY: int(self.__subIndex),
TrackDescriptor.TRACK_TYPE_KEY: self.__trackType,
TrackDescriptor.TAGS_KEY: dict(self.__trackTags),
TrackDescriptor.DISPOSITION_SET_KEY: set(self.__dispositionSet),
TrackDescriptor.AUDIO_LAYOUT_KEY: self.__audioLayout,
}
if self.__trackType == TrackType.ATTACHMENT:
kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = self.__attachmentFormat
else:
kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec
if context is not None:
kwargs[TrackDescriptor.CONTEXT_KEY] = context
elif self.__context:
kwargs[TrackDescriptor.CONTEXT_KEY] = self.__context
return TrackDescriptor(**kwargs)
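The `getFormatDescriptor` and `clone` changes above route attachment tracks through `AttachmentFormat` while every other track type keeps its `TrackCodec`. A stripped-down sketch of that dispatch, using simplified stand-in enums rather than the project's real classes:

```python
from enum import Enum

class TrackType(Enum):
    VIDEO = 'video'
    ATTACHMENT = 'attachment'

class TrackCodec(Enum):
    H264 = 'h264'
    UNKNOWN = 'unknown'

class AttachmentFormat(Enum):
    TTF = 'ttf'
    UNKNOWN = 'unknown'

class TrackDescriptor:
    def __init__(self, track_type, codec=TrackCodec.UNKNOWN,
                 attachment_format=AttachmentFormat.UNKNOWN):
        self._track_type = track_type
        self._codec = codec
        self._attachment_format = attachment_format

    def getFormatDescriptor(self):
        # Attachments carry no codec in the usual sense, so their format
        # descriptor comes from AttachmentFormat instead of TrackCodec.
        if self._track_type is TrackType.ATTACHMENT:
            return self._attachment_format
        return self._codec
```

This is why the `TrackController` hunks earlier in the diff switch from `getCodec().identifier()` to `getFormatDescriptor().identifier()`: the same call site now works for both attachments and codec-bearing tracks.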

View File

@@ -5,6 +5,7 @@ from textual.widgets import Header, Footer, Static, Button, SelectionList, Selec
from textual.containers import Grid
from textual.widgets._data_table import CellDoesNotExist
from .attachment_format import AttachmentFormat
from .audio_layout import AudioLayout
from .iso_language import IsoLanguage
from .tag_delete_screen import TagDeleteScreen
@@ -13,21 +14,34 @@ from .track_codec import TrackCodec
from .track_descriptor import TrackDescriptor
from .track_disposition import TrackDisposition
from .track_type import TrackType
from ffx.helper import formatRichColor, removeRichColor
from .i18n import t
from .screen_support import (
add_auto_table_column,
build_screen_bootstrap,
build_screen_log_pane,
go_back_or_exit,
populate_tag_table,
)
class TrackDetailsScreen(Screen):
BINDINGS = [
("escape", "back", t("Back")),
]
CSS = """
Grid {
grid-size: 5 24;
grid-rows: 2 2 2 2 2 3 3 2 2 3 2 2 2 2 2 6 2 2 6 2 2 2;
grid-columns: 25 25 25 25 125;
grid-columns: 18 1fr 1fr 1fr 4fr;
height: 100%;
width: 100%;
min-width: 115;
padding: 1;
overflow-x: auto;
overflow-y: auto;
}
Input {
@@ -46,6 +60,7 @@ class TrackDetailsScreen(Screen):
DataTable {
min-height: 6;
width: 100%;
}
DataTable .datatable--cursor {
@@ -95,31 +110,16 @@ class TrackDetailsScreen(Screen):
trackType: TrackType = None,
index=None,
subIndex=None,
metadata_only: bool = False,
):
super().__init__()
self.context = self.app.getContext()
bootstrap = build_screen_bootstrap(self.app.getContext())
self.context = bootstrap.context
self.__configurationData = self.context["config"].getData()
metadataConfiguration = (
self.__configurationData["metadata"]
if "metadata" in self.__configurationData.keys()
else {}
)
self.__removeTrackKeys = (
metadataConfiguration["streams"]["remove"]
if "streams" in metadataConfiguration.keys()
and "remove" in metadataConfiguration["streams"].keys()
else []
)
self.__ignoreTrackKeys = (
metadataConfiguration["streams"]["ignore"]
if "streams" in metadataConfiguration.keys()
and "ignore" in metadataConfiguration["streams"].keys()
else []
)
self.__removeTrackKeys = bootstrap.remove_track_keys
self.__ignoreTrackKeys = bootstrap.ignore_track_keys
self.__tagRowData: dict[object, tuple[str, str]] = {}
self.__isNew = trackDescriptor is None
self.__trackDescriptor = trackDescriptor
@@ -134,17 +134,25 @@ class TrackDetailsScreen(Screen):
)
self.__patternLabel = str(patternLabel)
self.__siblingTrackDescriptors = list(siblingTrackDescriptors or [])
self.__metadataOnly = bool(metadata_only)
self.__applyNormalization = bool(
self.context.get("apply_metadata_normalization", True)
)
if self.__isNew:
self.__trackType = trackType
self.__trackCodec = TrackCodec.UNKNOWN
self.__attachmentFormat = AttachmentFormat.UNKNOWN
self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED
self.__index = index
self.__subIndex = subIndex
self.__draftTrackTags = {}
initial_language = IsoLanguage.UNDEFINED
initial_title = ""
else:
self.__trackType = trackDescriptor.getType()
self.__trackCodec = trackDescriptor.getCodec()
self.__attachmentFormat = trackDescriptor.getAttachmentFormat()
self.__audioLayout = trackDescriptor.getAudioLayout()
self.__index = trackDescriptor.getIndex()
self.__subIndex = trackDescriptor.getSubIndex()
@@ -153,6 +161,19 @@ class TrackDetailsScreen(Screen):
for key, value in trackDescriptor.getTags().items()
if key not in ("language", "title")
}
initial_language = trackDescriptor.getLanguage()
initial_title = trackDescriptor.getTitle()
initialTitleEmpty = not str(initial_title).strip()
self.__titleAutoManaged = bool(
initialTitleEmpty
and (
initial_language == IsoLanguage.UNDEFINED
or (self.__metadataOnly and self.__applyNormalization)
)
)
self.__suppressTitleChanged = False
self.__lastAutoTitle = ""
def _descriptor_refs_same_track(self, descriptor: TrackDescriptor) -> bool:
if self.__trackDescriptor is None:
@@ -166,21 +187,61 @@ class TrackDetailsScreen(Screen):
)
def updateTags(self):
self.__tagRowData = populate_tag_table(
self.trackTagsTable,
self.__draftTrackTags,
ignore_keys=self.__ignoreTrackKeys,
remove_keys=self.__removeTrackKeys,
)
self.trackTagsTable.clear()
@staticmethod
def build_language_options():
return [
(language.label(), language)
for language in sorted(
[language for language in IsoLanguage if language != IsoLanguage.UNDEFINED],
key=lambda language: language.label().casefold(),
)
]
for key, value in self.__draftTrackTags.items():
textColor = None
if key in self.__ignoreTrackKeys:
textColor = "blue"
if key in self.__removeTrackKeys:
textColor = "red"
@staticmethod
def language_select_value(language):
return Select.NULL if language == IsoLanguage.UNDEFINED else language
row = (formatRichColor(key, textColor), formatRichColor(value, textColor))
self.trackTagsTable.add_row(*map(str, row))
def _apply_auto_title_for_language(self, language: IsoLanguage):
titleInput = self.query_one("#title_input", Input)
autoTitle = "" if language == IsoLanguage.UNDEFINED else language.label()
self.__suppressTitleChanged = True
titleInput.value = autoTitle
self.__suppressTitleChanged = False
self.__lastAutoTitle = autoTitle
def _handle_language_selection_changed(self, language):
if not self.__titleAutoManaged:
return
if not isinstance(language, IsoLanguage):
language = IsoLanguage.UNDEFINED
self._apply_auto_title_for_language(language)
def _handle_title_input_changed(self, titleValue: str):
if self.__suppressTitleChanged or not self.__titleAutoManaged:
return
language = self.query_one("#language_select", Select).value
if not isinstance(language, IsoLanguage):
language = IsoLanguage.UNDEFINED
expectedAutoTitle = "" if language == IsoLanguage.UNDEFINED else language.label()
if str(titleValue) != expectedAutoTitle:
self.__titleAutoManaged = False
def on_mount(self):
if getattr(self, 'context', {}).get('debug', False):
self.title = f"{self.app.title} - {self.__class__.__name__}"
self.query_one("#index_label", Static).update(
str(self.__index) if self.__index is not None else "-"
)
@@ -190,9 +251,9 @@ class TrackDetailsScreen(Screen):
self.query_one("#pattern_label", Static).update(self.__patternLabel)
if self.__trackType is not None:
self.query_one("#type_select", Select).value = self.__trackType.label()
self.query_one("#type_select", Select).value = self.__trackType
self.query_one("#audio_layout_select", Select).value = self.__audioLayout.label()
self.query_one("#audio_layout_select", Select).value = self.__audioLayout
for disposition in TrackDisposition:
@@ -202,7 +263,7 @@ class TrackDetailsScreen(Screen):
)
dispositionOption = (
disposition.label(),
t(disposition.label()),
disposition.index(),
dispositionIsSet,
)
@@ -211,101 +272,144 @@ class TrackDetailsScreen(Screen):
)
if self.__trackDescriptor is not None:
self.query_one("#language_select", Select).value = (
self.__trackDescriptor.getLanguage().label()
self.query_one("#language_select", Select).value = self.language_select_value(
self.__trackDescriptor.getLanguage()
)
self.query_one("#title_input", Input).value = self.__trackDescriptor.getTitle()
if self.__titleAutoManaged and not self.__trackDescriptor.getTitle().strip():
self._apply_auto_title_for_language(self.__trackDescriptor.getLanguage())
self.updateTags()
if self.__metadataOnly:
self.query_one("#type_select", Select).disabled = True
self.query_one("#audio_layout_select", Select).disabled = True
def on_select_changed(self, event: Select.Changed) -> None:
if event.select.id == "language_select":
self._handle_language_selection_changed(event.value)
def on_input_changed(self, event: Input.Changed) -> None:
if event.input.id == "title_input":
self._handle_title_input_changed(event.value)
def compose(self):
self.trackTagsTable = DataTable(classes="five")
self.column_key_track_tag_key = self.trackTagsTable.add_column("Key", width=50)
self.column_key_track_tag_value = self.trackTagsTable.add_column("Value", width=100)
self.column_key_track_tag_key = add_auto_table_column(self.trackTagsTable, t("Key"))
self.column_key_track_tag_value = add_auto_table_column(self.trackTagsTable, t("Value"))
self.trackTagsTable.cursor_type = "row"
languages = [language.label() for language in IsoLanguage]
yield Header()
with Grid():
# Row 1
yield Static(
"New stream" if self.__isNew else "Edit stream",
t("New stream") if self.__isNew else t("Edit stream"),
id="toplabel",
classes="five",
)
yield Static("for pattern")
# Row 2
yield Static(t("for pattern"))
yield Static("", id="pattern_label", classes="four", markup=False)
# Row 3
yield Static(" ", classes="five")
yield Static("Index / Subindex")
# Row 4
yield Static(t("Index / Subindex"))
yield Static("", id="index_label", classes="two")
yield Static("", id="subindex_label", classes="two")
# Row 5
yield Static(" ", classes="five")
yield Static("Type")
yield Select.from_values(
[trackType.label() for trackType in TrackType],
# Row 6
yield Static(t("Type"))
yield Select(
[(t(trackType.label()), trackType) for trackType in TrackType],
classes="four",
id="type_select",
)
yield Static("Audio Layout")
yield Select.from_values(
[layout.label() for layout in AudioLayout],
# Row 7
yield Static(t("Audio Layout"))
yield Select(
[(t(layout.label()), layout) for layout in AudioLayout],
classes="four",
id="audio_layout_select",
)
# Row 8
yield Static(" ", classes="five")
# Row 9
yield Static(" ", classes="five")
yield Static("Language")
yield Select.from_values(languages, classes="four", id="language_select")
# Row 10
yield Static(t("Language"))
yield Select(
self.build_language_options(),
prompt=t("Select"),
classes="four",
id="language_select",
)
# Row 11
yield Static(" ", classes="five")
yield Static("Title")
# Row 12
yield Static(t("Title"))
yield Input(id="title_input", classes="four")
# Row 13
yield Static(" ", classes="five")
# Row 14
yield Static(" ", classes="five")
yield Static("Stream tags")
# Row 15
yield Static(t("Stream tags"))
yield Static(" ")
yield Button("Add", id="button_add_stream_tag")
yield Button("Edit", id="button_edit_stream_tag")
yield Button("Delete", id="button_delete_stream_tag")
yield Button(t("Add"), id="button_add_stream_tag")
yield Button(t("Edit"), id="button_edit_stream_tag")
yield Button(t("Delete"), id="button_delete_stream_tag")
# Row 16
yield self.trackTagsTable
# Row 17
yield Static(" ", classes="five")
yield Static("Stream dispositions", classes="five")
# Row 18
yield Static(t("Stream dispositions"), classes="five")
# Row 19
yield SelectionList[int](
classes="five",
id="dispositions_selection_list",
)
yield Static(" ", classes="five")
# Row 20
yield Static(" ", classes="five")
yield Button("Save", id="save_button")
yield Button("Cancel", id="cancel_button")
# Row 21
yield Static(" ", classes="five")
# Row 22
yield Button(t("Save"), id="save_button")
yield Button(t("Cancel"), id="cancel_button")
# Row 23
yield Static(" ", classes="five")
# Row 24
yield Static(" ", classes="five", id="messagestatic")
yield build_screen_log_pane()
yield Footer(id="footer")
def getTrackDescriptorFromInput(self):
@@ -328,15 +432,21 @@ class TrackDetailsScreen(Screen):
if self.__subIndex is not None and int(self.__subIndex) >= 0:
kwargs[TrackDescriptor.SUB_INDEX_KEY] = int(self.__subIndex)
selectedTrackType = TrackType.fromLabel(
self.query_one("#type_select", Select).value
)
selectedTrackType = self.query_one("#type_select", Select).value
if not isinstance(selectedTrackType, TrackType):
selectedTrackType = TrackType.UNKNOWN
kwargs[TrackDescriptor.TRACK_TYPE_KEY] = selectedTrackType
kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec
if selectedTrackType == TrackType.ATTACHMENT:
kwargs[TrackDescriptor.ATTACHMENT_FORMAT_KEY] = self.__attachmentFormat
else:
kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec
if selectedTrackType == TrackType.AUDIO:
kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.fromLabel(
self.query_one("#audio_layout_select", Select).value
selectedAudioLayout = self.query_one("#audio_layout_select", Select).value
kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = (
selectedAudioLayout
if isinstance(selectedAudioLayout, AudioLayout)
else AudioLayout.LAYOUT_UNDEFINED
)
else:
kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.LAYOUT_UNDEFINED
@@ -344,8 +454,8 @@ class TrackDetailsScreen(Screen):
trackTags = dict(self.__draftTrackTags)
language = self.query_one("#language_select", Select).value
if language:
trackTags["language"] = IsoLanguage.find(language).threeLetter()
if isinstance(language, IsoLanguage):
trackTags["language"] = language.threeLetter()
title = self.query_one("#title_input", Input).value
if title:
@@ -362,6 +472,9 @@ class TrackDetailsScreen(Screen):
return TrackDescriptor(**kwargs)
def action_back(self):
go_back_or_exit(self)
def getSelectedTag(self):
try:
@@ -370,12 +483,7 @@ class TrackDetailsScreen(Screen):
)
if row_key is not None:
selected_tag_data = self.trackTagsTable.get_row(row_key)
tagKey = removeRichColor(selected_tag_data[0])
tagValue = removeRichColor(selected_tag_data[1])
return tagKey, tagValue
return self.__tagRowData.get(row_key)
return None
@@ -427,7 +535,9 @@ class TrackDetailsScreen(Screen):
):
self.query_one("#messagestatic", Static).update(
"Cannot add another stream with disposition flag 'default' or 'forced' set"
t(
"Cannot add another stream with disposition flag 'default' or 'forced' set"
)
)
else:
self.query_one("#messagestatic", Static).update(" ")
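The title auto-management handlers above keep the title input in sync with the selected language until the user types something that diverges, at which point auto-management switches off for good. A stripped-down sketch of that state machine without the Textual widgets (class and method names here are illustrative, not the screen's actual API):

```python
class AutoTitle:
    # Mirrors the auto-managed title behaviour: the title tracks the
    # language label until a manual edit diverges from the expected value.
    def __init__(self):
        self.auto_managed = True
        self.title = ""
        self._language_label = ""

    def on_language_changed(self, language_label: str) -> None:
        self._language_label = language_label
        if self.auto_managed:
            self.title = language_label

    def on_title_edited(self, value: str) -> None:
        self.title = value
        if self.auto_managed and value != self._language_label:
            self.auto_managed = False
```

Note the suppression flag in the real screen (`__suppressTitleChanged`): programmatic writes to the input also fire `Input.Changed`, so the screen must distinguish its own updates from user edits to avoid disabling auto-management on every language change.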

Binary file not shown.

Binary file not shown.

View File

@@ -7,6 +7,7 @@ import os
from pathlib import Path
import subprocess
import sys
from functools import lru_cache
from typing import Mapping
@@ -95,8 +96,69 @@ def write_vtt(path: Path, lines: tuple[str, ...]) -> Path:
return path
def create_source_fixture(workdir: Path, filename: str, tracks: list[SourceTrackSpec], duration_seconds: int = 1) -> Path:
@lru_cache(maxsize=None)
def _ffmpeg_encoder_is_available(encoder_name: str) -> bool:
completed = subprocess.run(
["ffmpeg", "-encoders"],
capture_output=True,
text=True,
)
if completed.returncode != 0:
return False
encoder_label = str(encoder_name).strip()
for line in completed.stdout.splitlines():
if not line.startswith(" "):
continue
tokens = line.split(maxsplit=2)
if len(tokens) >= 2 and tokens[1] == encoder_label:
return True
return False
def _resolve_fixture_video_encoder(
video_encoder: str,
video_encoder_options: tuple[str, ...],
) -> tuple[str, tuple[str, ...]]:
if video_encoder != "libx264":
return video_encoder, video_encoder_options
if _ffmpeg_encoder_is_available("libx264"):
return video_encoder, video_encoder_options
if _ffmpeg_encoder_is_available("libopenh264"):
# Keep fixture generation software-based when libx264 is missing.
return "libopenh264", ("-pix_fmt", "yuv420p")
return video_encoder, video_encoder_options
def create_source_fixture(
workdir: Path,
filename: str,
tracks: list[SourceTrackSpec],
duration_seconds: int = 1,
*,
video_encoder: str = "libx264",
video_encoder_options: tuple[str, ...] = (
"-preset",
"ultrafast",
"-crf",
"35",
"-pix_fmt",
"yuv420p",
),
audio_encoder: str = "aac",
audio_encoder_options: tuple[str, ...] = ("-b:a", "48k"),
subtitle_encoder: str = "webvtt",
) -> Path:
output_path = workdir / filename
video_encoder, video_encoder_options = _resolve_fixture_video_encoder(
video_encoder,
video_encoder_options,
)
has_video = any(track.track_type == TrackType.VIDEO for track in tracks)
has_audio = any(track.track_type == TrackType.AUDIO for track in tracks)
@@ -189,21 +251,16 @@ def create_source_fixture(workdir: Path, filename: str, tracks: list[SourceTrack
command += map_tokens
command += metadata_tokens
command += disposition_tokens
if has_video:
command += ["-c:v", video_encoder] + list(video_encoder_options)
if has_audio:
command += ["-c:a", audio_encoder] + list(audio_encoder_options)
if subtitle_input_indices:
command += ["-c:s", subtitle_encoder]
command += [
"-c:v",
"libx264",
"-preset",
"ultrafast",
"-crf",
"35",
"-pix_fmt",
"yuv420p",
"-c:a",
"aac",
"-b:a",
"48k",
"-c:s",
"webvtt",
"-t",
str(duration_seconds),
"-shortest",

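The `_ffmpeg_encoder_is_available` helper above scans `ffmpeg -encoders` output for a name in the second column of each indented row. A self-contained sketch of just that parsing step, run against a hypothetical sample of the listing format:

```python
def encoder_in_listing(listing: str, encoder_name: str) -> bool:
    # Encoder rows in `ffmpeg -encoders` output are indented and look like
    # " V....D libx264   libx264 H.264 / AVC ..."; the name is the second token.
    label = encoder_name.strip()
    for line in listing.splitlines():
        if not line.startswith(" "):
            continue
        tokens = line.split(maxsplit=2)
        if len(tokens) >= 2 and tokens[1] == label:
            return True
    return False

# Hypothetical excerpt of the listing, for illustration only.
SAMPLE = """Encoders:
 V....D libx264              libx264 H.264 / AVC / MPEG-4 AVC
 A....D aac                  AAC (Advanced Audio Coding)
"""
```

Matching on the second token rather than substring search avoids false positives from encoder descriptions (e.g. "libx264" appearing inside another row's long-name column).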
View File

@@ -0,0 +1,211 @@
from __future__ import annotations
import os
from pathlib import Path
import sys
import tempfile
import unittest
from unittest.mock import patch
from click.testing import CliRunner
SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
sys.path.insert(0, str(SRC_ROOT))
from ffx import cli # noqa: E402
from ffx.diagnostics import FfmpegSkipFileWarning, recordUnremediedIssue # noqa: E402
from ffx.logging_utils import get_ffx_logger # noqa: E402
class _FakeMediaDescriptor:
def getVideoTracks(self):
return []
def getAudioTracks(self):
return []
def getSubtitleTracks(self):
return []
def getAttachmentTracks(self):
return []
def applyOverrides(self, overrides):
return None
class _FakeFileProperties:
def __init__(self, context, source_path):
self.source_path = source_path
def getShowId(self):
return -1
def getSeason(self):
return -1
def getEpisode(self):
return -1
def getMediaDescriptor(self):
return _FakeMediaDescriptor()
def getPattern(self):
return None
class _FakeShiftedSeasonController:
def __init__(self, context):
self.context = context
def shiftSeason(self, show_id, season, episode, patternId=None):
return season, episode
class _FakeShowController:
def __init__(self, context):
self.context = context
def getShowDescriptor(self, show_id):
return None
class _FakeFfxController:
calls: list[str] = []
mode = "skip_first"
def __init__(self, context, *args, **kwargs):
self.context = context
def runJob(self, sourcePath, *args, **kwargs):
self.calls.append(sourcePath)
if self.mode == "clean":
return
if self.mode == "warn_unhandled" and sourcePath.endswith("episode1.avi"):
recordUnremediedIssue(
self.context,
sourcePath,
"unhandled-warning",
)
return
if self.mode == "skip_first" and sourcePath.endswith("episode1.avi"):
message = (
f"Skipping file {sourcePath}: ffmpeg still reported unset packet "
+ "timestamps after retry with -fflags +genpts."
)
recordUnremediedIssue(
self.context,
sourcePath,
"retry-with-generated-pts",
)
self.context["logger"].warning(message)
raise FfmpegSkipFileWarning(message)
class ConvertDiagnosticCliTests(unittest.TestCase):
def setUp(self):
logger = get_ffx_logger()
for handler in list(logger.handlers):
logger.removeHandler(handler)
try:
handler.close()
except Exception:
pass
self.tempdir = tempfile.TemporaryDirectory()
self.home_dir = Path(self.tempdir.name) / "home"
self.home_dir.mkdir()
self.database_path = Path(self.tempdir.name) / "test.db"
self.source_dir = Path(self.tempdir.name) / "source"
self.source_dir.mkdir()
self.source_one = self.source_dir / "episode1.avi"
self.source_two = self.source_dir / "episode2.avi"
self.source_one.write_bytes(b"one")
self.source_two.write_bytes(b"two")
_FakeFfxController.calls = []
_FakeFfxController.mode = "skip_first"
def tearDown(self):
self.tempdir.cleanup()
def test_convert_continues_after_skipping_one_file_due_to_ffmpeg_diagnostic(self):
runner = CliRunner()
with (
patch("ffx.file_properties.FileProperties", _FakeFileProperties),
patch("ffx.ffx_controller.FfxController", _FakeFfxController),
patch(
"ffx.shifted_season_controller.ShiftedSeasonController",
_FakeShiftedSeasonController,
),
patch("ffx.show_controller.ShowController", _FakeShowController),
):
result = runner.invoke(
cli.ffx,
[
"--database-file",
str(self.database_path),
"convert",
"--no-tmdb",
"--no-pattern",
str(self.source_one),
str(self.source_two),
],
env={**os.environ, "HOME": str(self.home_dir)},
)
self.assertEqual(0, result.exit_code, result.output)
self.assertEqual(
[str(self.source_one), str(self.source_two)],
_FakeFfxController.calls,
)
self.assertIn("Skipping file", result.output)
self.assertIn("-fflags +genpts", result.output)
self.assertIn("Files with ffmpeg findings that require review:", result.output)
self.assertIn(
"episode1.avi: retry-with-generated-pts",
result.output,
)
def test_convert_prints_clean_summary_when_no_unremedied_issues_were_seen(self):
runner = CliRunner()
_FakeFfxController.mode = "clean"
with (
patch("ffx.file_properties.FileProperties", _FakeFileProperties),
patch("ffx.ffx_controller.FfxController", _FakeFfxController),
patch(
"ffx.shifted_season_controller.ShiftedSeasonController",
_FakeShiftedSeasonController,
),
patch("ffx.show_controller.ShowController", _FakeShowController),
):
result = runner.invoke(
cli.ffx,
[
"--database-file",
str(self.database_path),
"convert",
"--no-tmdb",
"--no-pattern",
str(self.source_one),
str(self.source_two),
],
env={**os.environ, "HOME": str(self.home_dir)},
)
self.assertEqual(0, result.exit_code, result.output)
self.assertIn(
"All files converted with no issues.",
result.output,
)
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,142 @@
from __future__ import annotations
import os
from pathlib import Path
import sys
import tempfile
import unittest
from unittest.mock import patch
from click.testing import CliRunner
SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
sys.path.insert(0, str(SRC_ROOT))
from ffx import cli # noqa: E402
class _FakePattern:
def __init__(self, pattern_id: int):
self._pattern_id = pattern_id
def getId(self):
return self._pattern_id
class _FakeFileProperties:
def __init__(self, context, source_path):
self.source_path = source_path
def getShowId(self):
return 42 if self.source_path.endswith("mapped.mkv") else -1
def getSeason(self):
if self.source_path.endswith("unknown.mkv"):
return -1
return 1
def getEpisode(self):
if self.source_path.endswith("unknown.mkv"):
return -1
return 3
def getPattern(self):
if self.source_path.endswith("mapped.mkv"):
return _FakePattern(7)
return None
class _FakeShiftedSeasonController:
def __init__(self, context):
self.context = context
def resolveShiftSeason(self, show_id, season, episode, patternId=None):
if patternId is not None:
return 2, 1, "pattern"
return season, episode, "default"
class InspectShiftCliTests(unittest.TestCase):
def setUp(self):
self.tempdir = tempfile.TemporaryDirectory()
self.home_dir = Path(self.tempdir.name) / "home"
self.home_dir.mkdir()
self.database_path = Path(self.tempdir.name) / "test.db"
self.source_dir = Path(self.tempdir.name) / "source"
self.source_dir.mkdir()
self.mapped_path = self.source_dir / "mapped.mkv"
self.mapped_path.write_bytes(b"mapped")
self.identity_path = self.source_dir / "identity.mkv"
self.identity_path.write_bytes(b"identity")
self.unknown_path = self.source_dir / "unknown.mkv"
self.unknown_path.write_bytes(b"unknown")
def tearDown(self):
self.tempdir.cleanup()
def test_inspect_shift_prints_resolved_mapping_for_each_file(self):
runner = CliRunner()
with (
patch("ffx.file_properties.FileProperties", _FakeFileProperties),
patch(
"ffx.shifted_season_controller.ShiftedSeasonController",
_FakeShiftedSeasonController,
),
):
result = runner.invoke(
cli.ffx,
[
"--database-file",
str(self.database_path),
"inspect",
"--shift",
str(self.mapped_path),
str(self.identity_path),
str(self.unknown_path),
],
env={**os.environ, "HOME": str(self.home_dir)},
)
self.assertEqual(0, result.exit_code, result.output)
self.assertIn(
f"{self.mapped_path}: 1/3 -> 2/1 from pattern",
result.output,
)
self.assertIn(
f"{self.identity_path}: none",
result.output,
)
self.assertIn(
f"{self.unknown_path}: no season/episode recognized",
result.output,
)
def test_inspect_without_shift_requires_exactly_one_filename(self):
runner = CliRunner()
result = runner.invoke(
cli.ffx,
[
"--database-file",
str(self.database_path),
"inspect",
str(self.mapped_path),
str(self.unknown_path),
],
env={**os.environ, "HOME": str(self.home_dir)},
)
self.assertNotEqual(0, result.exit_code)
self.assertIn(
"Inspect without --shift requires exactly one filename.",
result.output,
)
if __name__ == "__main__":
unittest.main()

View File

@@ -168,6 +168,40 @@ class CliLazyImportTests(unittest.TestCase):
result["modules"],
)
def test_root_debug_flag_parses_without_loading_runtime_modules(self):
result = self.run_python(
textwrap.dedent(
f"""
import json
import sys
sys.path.insert(0, {str(SRC_ROOT)!r})
import ffx.cli
context = ffx.cli.ffx.make_context(
"ffx",
["--debug", "help"],
resilient_parsing=True,
)
print(json.dumps({{
"debug": context.params["debug"],
"modules": {{
module_name: module_name in sys.modules
for module_name in {HEAVY_MODULES!r}
}},
}}))
"""
)
)
self.assertTrue(result["debug"])
self.assertTrue(
all(not is_loaded for is_loaded in result["modules"].values()),
result["modules"],
)
def test_convert_cut_option_supports_flag_duration_and_start_duration_forms(self):
result = self.run_python(
textwrap.dedent(
@@ -229,6 +263,92 @@ class CliLazyImportTests(unittest.TestCase):
result["modules"],
)
def test_convert_copy_flags_parse_without_loading_runtime_modules(self):
result = self.run_python(
textwrap.dedent(
f"""
import click
import json
import sys
sys.path.insert(0, {str(SRC_ROOT)!r})
import ffx.cli
context = ffx.cli.convert.make_context(
"convert",
["--copy-video", "--copy-audio"],
resilient_parsing=True,
)
help_output = ffx.cli.convert.get_help(click.Context(ffx.cli.convert))
print(json.dumps({{
"copy_video": context.params["copy_video"],
"copy_audio": context.params["copy_audio"],
"output": help_output,
"modules": {{
module_name: module_name in sys.modules
for module_name in {HEAVY_MODULES!r}
}},
}}))
"""
)
)
self.assertTrue(result["copy_video"])
self.assertTrue(result["copy_audio"])
self.assertIn("--copy-video", result["output"])
self.assertIn("--copy-audio", result["output"])
self.assertTrue(
all(not is_loaded for is_loaded in result["modules"].values()),
result["modules"],
)
def test_edit_command_avoids_database_bootstrap(self):
result = self.run_python(
textwrap.dedent(
f"""
import json
import os
import sys
import tempfile
from click.testing import CliRunner
sys.path.insert(0, {str(SRC_ROOT)!r})
import ffx.cli
import ffx.ffx_app
import ffx.logging_utils
ffx.ffx_app.FfxApp.run = lambda self: None
ffx.logging_utils.configure_ffx_logger = lambda *args, **kwargs: None
runner = CliRunner()
with tempfile.TemporaryDirectory() as tmpdir:
sample_path = os.path.join(tmpdir, "sample.mkv")
with open(sample_path, "w", encoding="utf-8"):
pass
invoke_result = runner.invoke(
ffx.cli.ffx,
["--dry-run", "edit", sample_path],
)
print(json.dumps({{
"exit_code": invoke_result.exit_code,
"output": invoke_result.output,
"modules": {{
module_name: module_name in sys.modules
for module_name in {HEAVY_MODULES!r}
}},
}}))
"""
)
)
self.assertEqual(0, result["exit_code"], result["output"])
self.assertFalse(result["modules"]["ffx.database"], result["modules"])
if __name__ == "__main__":
unittest.main()
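All of these lazy-import tests share one probe pattern: run a fresh interpreter, exercise only CLI parsing, then report which heavy modules ended up in `sys.modules`. A minimal standalone sketch of that probe, with placeholder module names instead of FFX's real `HEAVY_MODULES` list:

```python
# Sketch of the lazy-import probe used throughout these tests: execute
# statements in a clean interpreter and report which "heavy" modules
# got imported as a side effect. The module names are placeholders.
import json
import subprocess
import sys

HEAVY_MODULES = ("sqlite3", "decimal")  # stand-ins for FFX's real list


def probe_loaded_modules(statements, heavy_modules=HEAVY_MODULES):
    """Run statements in a subprocess; return {module: was_imported}."""
    script = (
        "import json, sys\n"
        + statements + "\n"
        + "print(json.dumps({m: m in sys.modules for m in "
        + repr(tuple(heavy_modules)) + "}))"
    )
    completed = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)
```

Because each probe gets its own interpreter, earlier imports in the test process cannot leak into the result.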


@@ -0,0 +1,89 @@
from __future__ import annotations

from pathlib import Path
import sys
import unittest

SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))

from ffx import cli  # noqa: E402
from ffx.track_codec import TrackCodec  # noqa: E402
from ffx.track_descriptor import TrackDescriptor  # noqa: E402
from ffx.track_type import TrackType  # noqa: E402


class UnmuxSequenceTests(unittest.TestCase):
    def test_h265_video_unmux_uses_annex_b_bitstream_filter_without_forced_format(self):
        track_descriptor = TrackDescriptor(
            index=0,
            sub_index=0,
            track_type=TrackType.VIDEO,
            codec_name=TrackCodec.H265,
            tags={},
            disposition_set=set(),
        )
        sequence = cli.getUnmuxSequence(
            track_descriptor,
            "input.mp4",
            "episode_0_eng",
        )
        self.assertEqual(
            [
                "ffmpeg",
                "-y",
                "-i",
                "input.mp4",
                "-map",
                "0:v:0",
                "-c:v",
                "copy",
                "-bsf:v",
                "hevc_mp4toannexb",
                "episode_0_eng.h265",
            ],
            sequence,
        )

    def test_non_h265_unmux_keeps_generic_copy_behavior(self):
        track_descriptor = TrackDescriptor(
            index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.SRT,
            tags={},
            disposition_set=set(),
        )
        sequence = cli.getUnmuxSequence(
            track_descriptor,
            "input.mkv",
            "episode_1_eng",
        )
        self.assertEqual(
            [
                "ffmpeg",
                "-y",
                "-i",
                "input.mkv",
                "-map",
                "0:s:0",
                "-c",
                "copy",
                "-f",
                "srt",
                "episode_1_eng.srt",
            ],
            sequence,
        )


if __name__ == "__main__":
    unittest.main()
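The two expectations above pin down a simple dispatch: an H.265 track pulled out of MP4 needs the `hevc_mp4toannexb` bitstream filter (and no forced `-f`, since the `.h265` extension implies the raw format), while every other codec keeps a plain stream copy with an explicit output format. A minimal sketch, assuming a simplified signature (the real builder is `cli.getUnmuxSequence` operating on a `TrackDescriptor`):

```python
# Hypothetical unmux command builder matching the two test expectations;
# track_kind/codec are plain strings here instead of FFX's enum types.

def build_unmux_command(track_kind, codec, source_path, output_stem):
    """Return an ffmpeg argv that extracts a single track by copy."""
    selector = {"video": "v", "audio": "a", "subtitle": "s"}[track_kind]
    command = ["ffmpeg", "-y", "-i", source_path, "-map", f"0:{selector}:0"]
    if codec == "h265":
        # Raw HEVC needs Annex B start codes; the .h265 extension
        # already implies the output format, so no -f flag is forced.
        command += ["-c:v", "copy", "-bsf:v", "hevc_mp4toannexb"]
        return command + [f"{output_stem}.h265"]
    # Generic path: stream copy with an explicit container format.
    command += ["-c", "copy", "-f", codec]
    return command + [f"{output_stem}.{codec}"]
```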


@@ -68,11 +68,14 @@ class UpgradeCommandTests(unittest.TestCase):
subprocess_calls.append((args, kwargs))
if args == ['git', 'status', '--porcelain', '--untracked-files=no']:
return self.make_completed(args, stdout="M src/ffx/constants.py\n")
if args == ['git', 'rev-parse', '--abbrev-ref', 'HEAD']:
return self.make_completed(args, stdout="main\n")
return self.make_completed(args)
with (
patch.object(cli, "getBundleRepoPath", return_value=repo_path),
patch.object(cli, "getBundlePipPath", return_value=pip_path),
patch.object(cli, "getBundleVersion", return_value="0.3.2"),
patch.object(cli.os.path, "isdir", return_value=True),
patch.object(cli.os.path, "isfile", return_value=True),
patch.object(cli.subprocess, "run", side_effect=fake_run),
@@ -81,6 +84,7 @@ class UpgradeCommandTests(unittest.TestCase):
self.assertEqual(0, result.exit_code, result.output)
self.assertIn("Tracked local changes detected in the bundle repository:", result.output)
self.assertIn("Updated FFX to version 0.3.2 from branch main.", result.output)
self.assertEqual(
[
['git', 'status', '--porcelain', '--untracked-files=no'],
@@ -89,6 +93,7 @@ class UpgradeCommandTests(unittest.TestCase):
['git', 'checkout', '-B', 'main', 'FETCH_HEAD'],
[pip_path, 'install', '--upgrade', 'pip', 'setuptools', 'wheel'],
[pip_path, 'install', '--editable', '.'],
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
],
[call[0] for call in subprocess_calls],
)
@@ -106,11 +111,14 @@ class UpgradeCommandTests(unittest.TestCase):
subprocess_calls.append((args, kwargs))
if args == ['git', 'status', '--porcelain', '--untracked-files=no']:
return self.make_completed(args, stdout="")
if args == ['git', 'rev-parse', '--abbrev-ref', 'HEAD']:
return self.make_completed(args, stdout="develop\n")
return self.make_completed(args)
with (
patch.object(cli, "getBundleRepoPath", return_value=repo_path),
patch.object(cli, "getBundlePipPath", return_value=pip_path),
patch.object(cli, "getBundleVersion", return_value="0.3.3"),
patch.object(cli.os.path, "isdir", return_value=True),
patch.object(cli.os.path, "isfile", return_value=True),
patch.object(cli.subprocess, "run", side_effect=fake_run),
@@ -118,12 +126,14 @@ class UpgradeCommandTests(unittest.TestCase):
result = runner.invoke(cli.ffx, ["upgrade"])
self.assertEqual(0, result.exit_code, result.output)
self.assertIn("Updated FFX to version 0.3.3 from branch develop.", result.output)
self.assertEqual(
[
['git', 'status', '--porcelain', '--untracked-files=no'],
['git', 'pull'],
[pip_path, 'install', '--upgrade', 'pip', 'setuptools', 'wheel'],
[pip_path, 'install', '--editable', '.'],
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
],
[call[0] for call in subprocess_calls],
)
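The asserted subprocess sequences in both tests can be summarized by a small command planner. Treat this as a reconstruction: only the commands asserted above are certain, and the `git fetch` form used on a dirty tree is a guess inferred from the `checkout -B … FETCH_HEAD` step that follows it.

```python
# Assumed command plan for the upgrade flow these tests exercise.

def plan_upgrade_commands(pip_path, tree_is_dirty, branch="main"):
    commands = [["git", "status", "--porcelain", "--untracked-files=no"]]
    if tree_is_dirty:
        # Tracked local changes: fetch, then hard-switch the branch
        # onto FETCH_HEAD instead of a plain pull. Fetch spec is a guess.
        commands.append(["git", "fetch", "origin", branch])
        commands.append(["git", "checkout", "-B", branch, "FETCH_HEAD"])
    else:
        commands.append(["git", "pull"])
    commands.append([pip_path, "install", "--upgrade", "pip", "setuptools", "wheel"])
    commands.append([pip_path, "install", "--editable", "."])
    # The active branch is read last, for the success message.
    commands.append(["git", "rev-parse", "--abbrev-ref", "HEAD"])
    return commands
```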


@@ -49,6 +49,9 @@ class ConfigureWorkstationScriptTests(unittest.TestCase):
"HOME": str(self.home_dir),
"PATH": f"{self.stub_bin_dir}:{os.environ.get('PATH', '')}",
"FFX_PYTHON": str(BUNDLE_PYTHON),
"LANG": "C.UTF-8",
"LC_ALL": "C.UTF-8",
"LC_MESSAGES": "C.UTF-8",
**env_overrides,
}
@@ -76,6 +79,7 @@ class ConfigureWorkstationScriptTests(unittest.TestCase):
self.assertEqual(
{
"databasePath": str(self.home_dir / ".local" / "var" / "ffx" / "ffx.db"),
"language": "de",
"logDirectory": str(self.home_dir / ".local" / "var" / "log"),
"subtitlesDirectory": str(
self.home_dir / ".local" / "var" / "sync" / "subtitles"
@@ -113,6 +117,24 @@ class ConfigureWorkstationScriptTests(unittest.TestCase):
config_data,
)
def test_script_seeds_system_language_into_default_config(self):
completed = self.run_script(
LANG="fr_FR.UTF-8",
LC_ALL="fr_FR.UTF-8",
LC_MESSAGES="fr_FR.UTF-8",
)
self.assertEqual(
0,
completed.returncode,
f"STDOUT:\n{completed.stdout}\nSTDERR:\n{completed.stderr}",
)
config_path = self.home_dir / ".local" / "etc" / "ffx.json"
config_data = json.loads(config_path.read_text(encoding="utf-8"))
self.assertEqual("fr", config_data["language"])
def test_script_honors_custom_template_override(self):
custom_template_path = Path(self.tempdir.name) / "custom-config.j2"
custom_template_path.write_text(


@@ -0,0 +1,196 @@
from __future__ import annotations
from pathlib import Path
import sys
import unittest
from unittest.mock import patch
SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
sys.path.insert(0, str(SRC_ROOT))
from ffx.diagnostics import ( # noqa: E402
FfmpegCommandRunner,
FfmpegDiagnosticMonitor,
FfmpegSkipFileWarning,
getUnremediedIssues,
iterUnremediedIssueSummaryLines,
)
class RecordingLogger:
def __init__(self):
self.messages: list[str] = []
def warning(self, message, *args, **kwargs):
if args:
message = message % args
self.messages.append(str(message))
class FfmpegDiagnosticsTests(unittest.TestCase):
def test_command_runner_retries_with_genpts_after_timestamp_warning(self):
logger = RecordingLogger()
context = {
"logger": logger,
"current_source_path": "tests/assets/avi/conan_S01E754_amalgam.avi",
}
runner = FfmpegCommandRunner(context)
commands = []
def fake_execute(commandSequence, **kwargs):
commands.append(list(commandSequence))
stderrLineHandler = kwargs["stderrLineHandler"]
if len(commands) == 1:
self.assertTrue(
stderrLineHandler(
"[matroska @ 0x1] Timestamps are unset in a packet for stream 0. "
+ "This is deprecated and will stop working in the future."
)
)
return "", "timestamp warning\n", -15
return "done", "", 0
with patch("ffx.diagnostics.monitor.executeProcess", side_effect=fake_execute):
out, err, rc = runner.execute(["ffmpeg", "-y", "-i", "input.avi", "output.mkv"])
self.assertEqual("done", out)
self.assertEqual("", err)
self.assertEqual(0, rc)
self.assertEqual(
[
["ffmpeg", "-y", "-i", "input.avi", "output.mkv"],
["ffmpeg", "-fflags", "+genpts", "-y", "-i", "input.avi", "output.mkv"],
],
commands,
)
self.assertEqual(
[
"ffmpeg reported unset packet timestamps for tests/assets/avi/conan_S01E754_amalgam.avi. "
+ "Stopping early and retrying with -fflags +genpts."
],
logger.messages,
)
self.assertEqual({}, getUnremediedIssues(context))
def test_command_runner_skips_file_when_timestamp_warning_persists_after_genpts(self):
logger = RecordingLogger()
context = {
"logger": logger,
"current_source_path": "tests/assets/avi/conan_S01E754_amalgam.avi",
}
runner = FfmpegCommandRunner(context)
def fake_execute(commandSequence, **kwargs):
stderrLineHandler = kwargs["stderrLineHandler"]
self.assertTrue(
stderrLineHandler(
"[matroska @ 0x1] Timestamps are unset in a packet for stream 0. "
+ "This is deprecated and will stop working in the future."
)
)
return "", "timestamp warning\n", -15
with patch("ffx.diagnostics.monitor.executeProcess", side_effect=fake_execute):
with self.assertRaises(FfmpegSkipFileWarning):
runner.execute(
["ffmpeg", "-fflags", "+genpts", "-y", "-i", "input.avi", "output.mkv"]
)
self.assertEqual(
[
"Skipping file tests/assets/avi/conan_S01E754_amalgam.avi: ffmpeg still reported "
+ "unset packet timestamps after retry with -fflags +genpts."
],
logger.messages,
)
self.assertEqual(
{
"tests/assets/avi/conan_S01E754_amalgam.avi": ["retry-with-generated-pts"]
},
getUnremediedIssues(context),
)
def test_monitor_tracks_non_harmless_corrupt_mpeg_audio_remedy_in_summary(self):
logger = RecordingLogger()
context = {
"logger": logger,
"current_source_path": "tests/assets/avi/conan_S01E763_amalgam.avi",
}
monitor = FfmpegDiagnosticMonitor(
context,
["ffmpeg", "-y", "-i", "input.avi", "output.mkv"],
)
self.assertFalse(
monitor.handle_stderr_line("[mp3float @ 0x1] invalid new backstep -1")
)
self.assertFalse(monitor.handle_stderr_line("[mp3float @ 0x1] invalid block type"))
self.assertFalse(
monitor.handle_stderr_line(
"[aist#0:1/mp3 @ 0x2] [dec:mp3float @ 0x3] Error submitting packet to decoder: "
+ "Invalid data found when processing input"
)
)
self.assertEqual(
[
"ffmpeg reported damaged MPEG audio frames while converting "
+ "tests/assets/avi/conan_S01E763_amalgam.avi. FFX will continue, but the "
+ "output audio may contain gaps or glitches."
],
logger.messages,
)
self.assertEqual(
{
"tests/assets/avi/conan_S01E763_amalgam.avi": ["warn-corrupt-mpeg-audio"]
},
getUnremediedIssues(context),
)
self.assertEqual(
["conan_S01E763_amalgam.avi: warn-corrupt-mpeg-audio"],
iterUnremediedIssueSummaryLines(context),
)
def test_monitor_tracks_unhandled_diagnostic_for_summary(self):
context = {
"logger": RecordingLogger(),
"current_source_path": "tests/assets/avi/example.avi",
}
monitor = FfmpegDiagnosticMonitor(
context,
["ffmpeg", "-y", "-i", "input.avi", "output.mkv"],
)
self.assertFalse(
monitor.handle_stderr_line(
"[avi @ 0x1] Strange warning with no automatic remedy is present"
)
)
self.assertEqual(
{
"tests/assets/avi/example.avi": ["unhandled-warning"]
},
getUnremediedIssues(context),
)
self.assertEqual(
["example.avi: unhandled-warning"],
iterUnremediedIssueSummaryLines(context),
)
self.assertEqual(
[
"ffmpeg reported a diagnostic with no automatic remedy while converting "
+ "tests/assets/avi/example.avi. FFX will continue, but review the output "
+ "file. First unhandled line: [avi @ 0x1] Strange warning with no automatic remedy is present"
],
context["logger"].messages,
)
if __name__ == "__main__":
unittest.main()
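The first two tests pin down a two-step escalation: a run that reports unset packet timestamps is stopped early and retried with `-fflags +genpts`; if the same warning appears on the retried command, the file is skipped. A hedged sketch of that control flow, where `SkipFile` stands in for `FfmpegSkipFileWarning` and `run_ffmpeg` for the real process executor:

```python
# Sketch of the timestamp-remedy escalation; return shape of run_ffmpeg
# is (stdout, stderr, returncode, saw_timestamp_warning).

class SkipFile(Exception):
    """Stand-in for ffx.diagnostics.FfmpegSkipFileWarning."""


def run_with_timestamp_remedy(command, run_ffmpeg):
    """Run ffmpeg once; escalate on unset-timestamp warnings."""
    out, err, rc, saw_timestamp_warning = run_ffmpeg(command)
    if not saw_timestamp_warning:
        return out, err, rc
    if "+genpts" in command:
        # The remedy was already applied and did not help: skip the file.
        raise SkipFile(command[command.index("-i") + 1])
    # Insert the fflags remedy right after the ffmpeg executable.
    retried_command = [command[0], "-fflags", "+genpts"] + command[1:]
    return run_with_timestamp_remedy(retried_command, run_ffmpeg)
```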


@@ -1,5 +1,6 @@
from __future__ import annotations
import click
from pathlib import Path
import sys
import unittest
@@ -14,6 +15,7 @@ if str(SRC_ROOT) not in sys.path:
from ffx.ffx_controller import FfxController # noqa: E402
from ffx.audio_layout import AudioLayout # noqa: E402
from ffx.logging_utils import get_ffx_logger # noqa: E402
from ffx.media_descriptor import MediaDescriptor # noqa: E402
from ffx.show_descriptor import ShowDescriptor # noqa: E402
@@ -32,6 +34,9 @@ class StaticConfig:
class FfxControllerTests(unittest.TestCase):
def tearDown(self):
FfxController.isFfmpegEncoderAvailable.cache_clear()
def make_context(self, video_encoder: VideoEncoder) -> dict:
return {
"logger": get_ffx_logger(),
@@ -39,6 +44,8 @@ class FfxControllerTests(unittest.TestCase):
"video_encoder": video_encoder,
"dry_run": False,
"perform_cut": False,
"copy_video": False,
"copy_audio": False,
"bitrates": {
"stereo": "112k",
"ac3": "256k",
@@ -71,6 +78,56 @@ class FfxControllerTests(unittest.TestCase):
)
return descriptor, source_descriptor
def make_media_descriptors_with_audio(
self,
audio_layout: AudioLayout = AudioLayout.LAYOUT_STEREO,
) -> tuple[MediaDescriptor, MediaDescriptor]:
descriptor = MediaDescriptor(
track_descriptors=[
TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.VIDEO,
codec_name=TrackCodec.H264,
),
TrackDescriptor(
index=1,
source_index=1,
sub_index=0,
track_type=TrackType.AUDIO,
codec_name=TrackCodec.AAC,
audio_layout=audio_layout,
),
]
)
source_descriptor = MediaDescriptor(
track_descriptors=[
TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.VIDEO,
codec_name=TrackCodec.H264,
),
TrackDescriptor(
index=1,
source_index=1,
sub_index=0,
track_type=TrackType.AUDIO,
codec_name=TrackCodec.AAC,
audio_layout=audio_layout,
),
]
)
return descriptor, source_descriptor
def assert_token_pair(self, command: list[str], first: str, second: str):
self.assertTrue(
any(command[index:index + 2] == [first, second] for index in range(len(command) - 1)),
command,
)
def test_vp9_run_job_emits_file_level_encoding_quality_metadata(self):
context = self.make_context(VideoEncoder.VP9)
target_descriptor, source_descriptor = self.make_media_descriptors()
@@ -192,6 +249,135 @@ class FfxControllerTests(unittest.TestCase):
self.assertIn("ENCODING_QUALITY=19", commands[0])
mocked_info.assert_any_call("Setting quality 19 from pattern")
def test_copy_video_uses_single_copy_command_without_video_encoding_options(self):
context = self.make_context(VideoEncoder.VP9)
context["copy_video"] = True
target_descriptor, source_descriptor = self.make_media_descriptors_with_audio()
controller = FfxController(context, target_descriptor, source_descriptor)
commands = []
with patch.object(
controller,
"executeCommandSequence",
side_effect=lambda command: commands.append(command) or ("", "", 0),
):
controller.runJob(
"input.mkv",
"output.mkv",
chainIteration=[
{
"identifier": "quality",
"parameters": {"quality": 27},
},
{
"identifier": "nlmeans",
"parameters": {},
"tokens": ["nlmeans=s=2.0"],
},
],
cropArguments={
"output_width": 1280,
"output_height": 720,
"x_offset": 0,
"y_offset": 0,
},
)
self.assertEqual(1, len(commands))
self.assert_token_pair(commands[0], "-c:v", "copy")
self.assertIn("libopus", commands[0])
self.assertNotIn("libvpx-vp9", commands[0])
self.assertNotIn("-pass", commands[0])
self.assertNotIn("-vf", commands[0])
self.assertFalse(any(token.startswith("ENCODING_QUALITY=") for token in commands[0]))
def test_copy_audio_uses_audio_copy_without_audio_encoding_options(self):
context = self.make_context(VideoEncoder.H264)
context["copy_audio"] = True
target_descriptor, source_descriptor = self.make_media_descriptors_with_audio(
AudioLayout.LAYOUT_5_1
)
controller = FfxController(context, target_descriptor, source_descriptor)
commands = []
with patch.object(
controller,
"executeCommandSequence",
side_effect=lambda command: commands.append(command) or ("", "", 0),
):
controller.runJob(
"input.mkv",
"output.mkv",
chainIteration=[
{
"identifier": "quality",
"parameters": {"quality": 21},
}
],
)
self.assertEqual(1, len(commands))
self.assert_token_pair(commands[0], "-c:a", "copy")
self.assertIn("libx264", commands[0])
self.assertNotIn("libopus", commands[0])
self.assertFalse(any(token.startswith("-b:a") for token in commands[0]))
self.assertFalse(any(token.startswith("-filter:a") for token in commands[0]))
def test_generate_h264_tokens_prefers_libx264_when_available(self):
context = self.make_context(VideoEncoder.H264)
target_descriptor, source_descriptor = self.make_media_descriptors()
controller = FfxController(context, target_descriptor, source_descriptor)
with patch.object(
FfxController,
"getSupportedSoftwareH264Encoder",
return_value="libx264",
):
tokens = controller.generateH264Tokens(23)
self.assertEqual(
["-c:v:0", "libx264", "-preset", "slow", "-crf", "23"],
tokens,
)
def test_generate_h264_tokens_falls_back_to_libopenh264_and_logs_warning(self):
context = self.make_context(VideoEncoder.H264)
target_descriptor, source_descriptor = self.make_media_descriptors()
controller = FfxController(context, target_descriptor, source_descriptor)
with (
patch.object(
FfxController,
"getSupportedSoftwareH264Encoder",
return_value="libopenh264",
),
patch.object(context["logger"], "warning") as mocked_warning,
):
tokens = controller.generateH264Tokens(23)
self.assertEqual(
["-c:v:0", "libopenh264", "-pix_fmt", "yuv420p"],
tokens,
)
mocked_warning.assert_called_once_with(
"libx264 encoder unavailable; falling back to libopenh264 for H.264 encoding."
)
def test_generate_h264_tokens_raises_when_no_supported_software_encoder_exists(self):
context = self.make_context(VideoEncoder.H264)
target_descriptor, source_descriptor = self.make_media_descriptors()
controller = FfxController(context, target_descriptor, source_descriptor)
with patch.object(
FfxController,
"getSupportedSoftwareH264Encoder",
return_value=None,
):
with self.assertRaisesRegex(
click.ClickException,
"no supported software H.264 encoder is available",
):
controller.generateH264Tokens(23)
if __name__ == "__main__":
unittest.main()
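The copy-flag tests above reduce to a token swap: `copy_video` replaces the whole video encode path with `-c:v copy` (dropping filters, two-pass flags, and quality metadata), and `copy_audio` replaces audio encoding with `-c:a copy` (dropping bitrate and audio-filter options). A minimal illustration with placeholder encoder settings, not FFX's real ones:

```python
# Illustrative codec-token builder; encoder settings are placeholders.

def build_codec_tokens(copy_video, copy_audio):
    tokens = []
    if copy_video:
        tokens += ["-c:v", "copy"]  # no -vf, no -pass, no quality metadata
    else:
        tokens += ["-c:v", "libvpx-vp9", "-b:v", "0", "-crf", "27"]
    if copy_audio:
        tokens += ["-c:a", "copy"]  # no -b:a, no -filter:a
    else:
        tokens += ["-c:a", "libopus", "-b:a", "112k"]
    return tokens
```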


@@ -0,0 +1,82 @@
from __future__ import annotations
from pathlib import Path
import sys
import tempfile
import unittest
SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
sys.path.insert(0, str(SRC_ROOT))
from ffx.file_properties import FileProperties # noqa: E402
from ffx.i18n import set_current_language # noqa: E402
from ffx.logging_utils import get_ffx_logger # noqa: E402
from ffx.track_codec import TrackCodec # noqa: E402
from ffx.track_type import TrackType # noqa: E402
from tests.support.ffx_bundle import SourceTrackSpec, create_source_fixture # noqa: E402
class StaticConfig:
def __init__(self, data: dict):
self._data = data
def getData(self):
return self._data
class FilePropertiesAssetProbeTests(unittest.TestCase):
def tearDown(self):
set_current_language("de")
def test_boruto_webm_probe_recognizes_webm_stream_codecs(self):
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
"language": "de",
"use_pattern": False,
}
set_current_language("de")
with tempfile.TemporaryDirectory() as tmpdir:
media_path = create_source_fixture(
Path(tmpdir),
"fixture.webm",
[
SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng"),
SourceTrackSpec(
TrackType.SUBTITLE,
identity="subtitle-2",
language="eng",
subtitle_lines=("Lorem ipsum dolor sit amet.",),
),
],
duration_seconds=3,
video_encoder="libvpx-vp9",
video_encoder_options=("-b:v", "0", "-crf", "45"),
audio_encoder="libopus",
audio_encoder_options=("-b:a", "48k"),
subtitle_encoder="webvtt",
)
file_properties = FileProperties(context, str(media_path))
tracks = file_properties.getMediaDescriptor().getTrackDescriptors()
subtitle_codecs = [
track.getCodec()
for track in tracks
if track.getType() == TrackType.SUBTITLE
]
self.assertIn(TrackCodec.VP9, [track.getCodec() for track in tracks])
self.assertIn(TrackCodec.OPUS, [track.getCodec() for track in tracks])
self.assertTrue(subtitle_codecs)
self.assertTrue(all(codec == TrackCodec.WEBVTT for codec in subtitle_codecs))
if __name__ == "__main__":
unittest.main()


@@ -107,6 +107,22 @@ class FilePropertiesProbeTests(unittest.TestCase):
+ ["/tmp/example_s01e01.mkv"]
)
def test_use_pattern_false_skips_pattern_controller_construction(self):
file_properties_module = self.import_module()
with patch.object(
file_properties_module,
"PatternController",
side_effect=AssertionError("PatternController should not be created"),
):
file_properties = file_properties_module.FileProperties(
self.make_context(),
"/tmp/example_s01e01.mkv",
)
self.assertEqual(-1, file_properties.getShowId())
self.assertIsNone(file_properties.getPattern())
def test_cropdetect_uses_configured_window_and_caches_results(self):
file_properties_module = self.import_module()
file_properties_module.FileProperties._clear_cropdetect_cache()

tests/unit/test_i18n.py Normal file

@@ -0,0 +1,89 @@
from __future__ import annotations

from pathlib import Path
import json
import sys
import tempfile
import unittest

SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))

from ffx.i18n import (  # noqa: E402
    detect_system_language,
    read_configured_language,
    resolve_application_language,
    set_current_language,
    t,
)
from ffx.iso_language import IsoLanguage  # noqa: E402


class I18nTests(unittest.TestCase):
    def tearDown(self):
        set_current_language("de")

    def test_cli_language_takes_precedence_over_config_and_system(self):
        self.assertEqual(
            "es",
            resolve_application_language(
                cli_language="es",
                config_language="fr",
                system_language="ja",
            ),
        )

    def test_config_language_takes_precedence_over_system(self):
        self.assertEqual(
            "fr",
            resolve_application_language(
                config_language="fr",
                system_language="ja",
            ),
        )

    def test_system_language_is_used_when_no_cli_or_config_is_present(self):
        self.assertEqual("ja", resolve_application_language(system_language="ja"))

    def test_german_is_default_when_no_supported_language_is_available(self):
        self.assertEqual(
            "de",
            resolve_application_language(
                env={
                    "LANG": "C.UTF-8",
                    "LC_ALL": "C.UTF-8",
                    "LC_MESSAGES": "C.UTF-8",
                }
            ),
        )

    def test_system_language_detection_normalizes_norwegian_bokmal(self):
        self.assertEqual(
            "nb",
            detect_system_language({"LANG": "nb_NO.UTF-8"}),
        )

    def test_read_configured_language_normalizes_language_code(self):
        with tempfile.TemporaryDirectory() as tempdir:
            config_path = Path(tempdir) / "ffx.json"
            config_path.write_text(
                json.dumps({"language": "pt_BR.UTF-8"}),
                encoding="utf-8",
            )
            self.assertEqual("pt", read_configured_language(config_path))

    def test_phrase_translation_uses_catalog_for_selected_language(self):
        set_current_language("fr")
        self.assertEqual("Ajouter", t("Add"))

    def test_iso_language_labels_use_catalog_for_selected_language(self):
        set_current_language("de")
        self.assertEqual("Deutsch", IsoLanguage.GERMAN.label())


if __name__ == "__main__":
    unittest.main()
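The precedence these tests encode is CLI flag over config over system locale, with German as the final fallback, plus locale normalization ("pt_BR.UTF-8" to "pt", Norwegian Bokmål locales to "nb", and "C" locales rejected). A minimal re-implementation for illustration; the supported-language set is an assumption, and the real logic lives in `ffx.i18n`:

```python
# Illustrative language resolution; SUPPORTED is an assumed excerpt.

SUPPORTED = {"de", "en", "fr", "es", "ja", "nb", "pt"}


def normalize(code):
    """'pt_BR.UTF-8' -> 'pt'; Bokmal locales map to 'nb'; else None."""
    if not code:
        return None
    base = code.split(".")[0].split("_")[0].lower()
    if base in ("no", "nb"):
        base = "nb"
    return base if base in SUPPORTED else None


def resolve_language(cli=None, config=None, system=None):
    # First supported candidate wins; German is the final fallback.
    for candidate in (cli, config, system):
        normalized = normalize(candidate)
        if normalized:
            return normalized
    return "de"
```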


@@ -16,8 +16,10 @@ if str(SRC_ROOT) not in sys.path:
from ffx.logging_utils import ( # noqa: E402
CONSOLE_HANDLER_NAME,
FILE_HANDLER_NAME,
MUTED_CONSOLE_LEVEL,
configure_ffx_logger,
get_ffx_logger,
set_ffx_console_logging_enabled,
)
@@ -81,6 +83,33 @@ class LoggingUtilsTests(unittest.TestCase):
self.cleanup_logger(logger_name)
def test_set_ffx_console_logging_enabled_mutes_and_restores_console_handler(self):
logger_name = "ffx-test-console-mute"
self.cleanup_logger(logger_name)
with tempfile.TemporaryDirectory() as tempdir:
log_path = Path(tempdir) / "ffx.log"
logger = configure_ffx_logger(
str(log_path),
logging.DEBUG,
logging.INFO,
name=logger_name,
)
console_handler = next(
handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME
)
self.assertEqual(logging.INFO, console_handler.level)
set_ffx_console_logging_enabled(logger, enabled=False)
self.assertEqual(MUTED_CONSOLE_LEVEL, console_handler.level)
set_ffx_console_logging_enabled(logger, enabled=True)
self.assertEqual(logging.INFO, console_handler.level)
self.cleanup_logger(logger_name)
if __name__ == "__main__":
unittest.main()
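The mute test above relies on a simple mechanism: disabling console logging raises the console handler's level beyond `CRITICAL` so nothing passes the filter, and re-enabling restores the configured level. A tiny sketch; the exact value of `MUTED_CONSOLE_LEVEL` here is an assumption, the real constant is defined in `ffx.logging_utils`:

```python
# Sketch of console muting via handler level; value is assumed.
import logging

MUTED_CONSOLE_LEVEL = logging.CRITICAL + 10


def set_console_logging_enabled(console_handler, enabled, restore_level=logging.INFO):
    """Mute by raising the level past CRITICAL; restore on re-enable."""
    console_handler.setLevel(restore_level if enabled else MUTED_CONSOLE_LEVEL)
```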


@@ -15,6 +15,7 @@ from ffx.media_descriptor import MediaDescriptor # noqa: E402
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet # noqa: E402
from ffx.track_descriptor import TrackDescriptor # noqa: E402
from ffx.track_type import TrackType # noqa: E402
from ffx.i18n import set_current_language # noqa: E402
from ffx.logging_utils import get_ffx_logger # noqa: E402
@@ -27,6 +28,9 @@ class StaticConfig:
class MediaDescriptorChangeSetTests(unittest.TestCase):
def tearDown(self):
set_current_language("de")
def test_non_primary_source_language_code_is_normalized_in_changed_track_metadata(self):
context = {
"logger": get_ffx_logger(),
@@ -171,6 +175,179 @@ class MediaDescriptorChangeSetTests(unittest.TestCase):
self.assertIn("language=deu", metadata_tokens)
self.assertNotIn("language=ger", metadata_tokens)
def test_subtitle_without_title_gets_language_name_when_normalization_enabled(self):
set_current_language("de")
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
}
source_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.SUBTITLE,
tags={"language": "ger"},
)
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.SUBTITLE,
tags={"language": "ger"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(track_descriptors=[target_track]),
MediaDescriptor(track_descriptors=[source_track]),
)
metadata_tokens = change_set.generateMetadataTokens()
change_set_obj = change_set.getChangeSetObj()
self.assertIn("-metadata:s:s:0", metadata_tokens)
self.assertIn("language=deu", metadata_tokens)
self.assertIn("title=Deutsch", metadata_tokens)
self.assertEqual(
"Deutsch",
change_set_obj["tracks"]["changed"][0]["tags"]["added"]["title"],
)
def test_subtitle_without_title_uses_current_language_for_generated_title(self):
set_current_language("en")
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
}
source_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.SUBTITLE,
tags={"language": "ger"},
)
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.SUBTITLE,
tags={"language": "ger"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(track_descriptors=[target_track]),
MediaDescriptor(track_descriptors=[source_track]),
)
metadata_tokens = change_set.generateMetadataTokens()
self.assertIn("title=German", metadata_tokens)
self.assertNotIn("title=Deutsch", metadata_tokens)
def test_audio_track_without_title_gets_language_name_when_normalization_enabled(self):
set_current_language("de")
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
}
source_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.AUDIO,
tags={"language": "ger"},
)
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.AUDIO,
tags={"language": "ger"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(track_descriptors=[target_track]),
MediaDescriptor(track_descriptors=[source_track]),
)
metadata_tokens = change_set.generateMetadataTokens()
self.assertIn("-metadata:s:a:0", metadata_tokens)
self.assertIn("language=deu", metadata_tokens)
self.assertIn("title=Deutsch", metadata_tokens)
def test_video_track_without_title_gets_language_name_when_normalization_enabled(self):
set_current_language("de")
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
}
source_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.VIDEO,
tags={"language": "ger"},
)
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.VIDEO,
tags={"language": "ger"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(track_descriptors=[target_track]),
MediaDescriptor(track_descriptors=[source_track]),
)
metadata_tokens = change_set.generateMetadataTokens()
self.assertIn("language=deu", metadata_tokens)
self.assertIn("title=Deutsch", metadata_tokens)
def test_changed_track_language_does_not_autofill_title_when_title_already_exists(self):
set_current_language("de")
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
}
source_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.SUBTITLE,
tags={"language": "ger", "title": "Deutsch [FN]"},
)
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.SUBTITLE,
tags={"language": "jpn", "title": "Deutsch [FN]"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(track_descriptors=[target_track]),
MediaDescriptor(track_descriptors=[source_track]),
)
metadata_tokens = change_set.generateMetadataTokens()
self.assertIn("language=jpn", metadata_tokens)
self.assertNotIn("title=Japanisch", metadata_tokens)
self.assertNotIn("title=Deutsch", metadata_tokens)
def test_target_only_tracks_still_emit_remove_tokens_for_configured_stream_keys(self):
context = {
"logger": get_ffx_logger(),
@@ -212,6 +389,79 @@ class MediaDescriptorChangeSetTests(unittest.TestCase):
self.assertIn("BPS=", metadata_tokens)
self.assertIn("KEEP_ME=keep-me", metadata_tokens)
def test_cleanup_can_be_disabled_per_context(self):
context = {
"logger": get_ffx_logger(),
"config": StaticConfig(
{
"metadata": {
"remove": ["creation_time"],
"streams": {
"remove": ["BPS"],
},
}
}
),
"apply_metadata_cleanup": False,
}
source_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.AUDIO,
tags={"BPS": "keep-me"},
)
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.AUDIO,
tags={"BPS": "keep-me"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(
tags={"creation_time": "keep-me"},
track_descriptors=[target_track],
),
MediaDescriptor(
tags={"creation_time": "keep-me"},
track_descriptors=[source_track],
),
)
metadata_tokens = change_set.generateMetadataTokens()
self.assertNotIn("creation_time=", metadata_tokens)
self.assertNotIn("BPS=", metadata_tokens)
def test_normalization_can_be_disabled_per_context(self):
context = {
"logger": get_ffx_logger(),
"config": StaticConfig({}),
"apply_metadata_normalization": False,
}
target_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.AUDIO,
tags={"language": "ger", "title": "German Main"},
)
change_set = MediaDescriptorChangeSet(
context,
MediaDescriptor(track_descriptors=[target_track]),
)
metadata_tokens = change_set.generateMetadataTokens()
self.assertIn("-metadata:s:a:0", metadata_tokens)
self.assertIn("language=ger", metadata_tokens)
self.assertNotIn("language=deu", metadata_tokens)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,240 @@
from __future__ import annotations
from pathlib import Path
import os
import sys
import tempfile
import unittest
from unittest.mock import patch
SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
sys.path.insert(0, str(SRC_ROOT))
from ffx.logging_utils import get_ffx_logger # noqa: E402
from ffx.helper import LogLevel # noqa: E402
from ffx.media_descriptor import MediaDescriptor # noqa: E402
from ffx.metadata_editor import ( # noqa: E402
apply_metadata_edits,
build_metadata_edit_command,
build_metadata_edit_context,
create_temporary_output_path,
)
from ffx.track_codec import TrackCodec # noqa: E402
from ffx.track_descriptor import TrackDescriptor # noqa: E402
from ffx.track_type import TrackType # noqa: E402
from ffx.video_encoder import VideoEncoder # noqa: E402
class StaticConfig:
def getData(self):
return {}
class NotificationCollector:
def __init__(self) -> None:
self.messages: list[str] = []
self.levels: list[LogLevel | None] = []
def __call__(self, message: str, level: LogLevel | None = None) -> None:
self.messages.append(message)
self.levels.append(level)
def make_context(*, dry_run: bool = False) -> dict:
return {
"logger": get_ffx_logger(),
"config": StaticConfig(),
"dry_run": dry_run,
"apply_metadata_cleanup": True,
"apply_metadata_normalization": True,
}
def make_descriptor() -> MediaDescriptor:
return MediaDescriptor(
track_descriptors=[
TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.VIDEO,
codec_name=TrackCodec.H264,
tags={"title": "Main"},
)
],
tags={"TITLE": "Demo"},
)
class MetadataEditorTests(unittest.TestCase):
def test_build_metadata_edit_context_forces_copy_without_signature(self):
context = build_metadata_edit_context(make_context())
self.assertEqual(VideoEncoder.COPY, context["video_encoder"])
self.assertFalse(context["perform_cut"])
self.assertTrue(context["no_signature"])
self.assertEqual({}, context["encoding_metadata_tags"])
self.assertTrue(context["apply_metadata_cleanup"])
self.assertTrue(context["apply_metadata_normalization"])
def test_create_temporary_output_path_uses_same_directory_and_extension(self):
with tempfile.TemporaryDirectory() as tmpdir:
source_path = os.path.join(tmpdir, "episode.mkv")
temporary_path = create_temporary_output_path(source_path)
self.assertEqual(".mkv", Path(temporary_path).suffix)
self.assertEqual(Path(source_path).parent, Path(temporary_path).parent)
def test_build_metadata_edit_command_maps_all_streams_and_uses_single_copy_codec(self):
context = build_metadata_edit_context(make_context())
baseline_descriptor = make_descriptor()
draft_descriptor = baseline_descriptor.clone(context=context)
command = build_metadata_edit_command(
context,
"/tmp/example.mkv",
"/tmp/.edit.mkv",
baseline_descriptor,
draft_descriptor,
)
self.assertEqual(1, command.count("-map"))
self.assertEqual(1, command.count("-c"))
self.assertNotIn("-c:v:0", command)
self.assertNotIn("-c:a:0", command)
self.assertNotIn("-c:s:0", command)
self.assertEqual(
["-map", "0", "-c", "copy"],
command[command.index("-map"):command.index("-c") + 2],
)
def test_apply_metadata_edits_rewrites_via_temporary_file_then_replaces_source(self):
context = make_context()
baseline_descriptor = make_descriptor()
draft_descriptor = baseline_descriptor.clone(context=context)
source_path = "/tmp/example.mkv"
expected_command = build_metadata_edit_command(
build_metadata_edit_context(context),
source_path,
"/tmp/.edit.mkv",
baseline_descriptor,
draft_descriptor,
)
with (
patch("ffx.metadata_editor.create_temporary_output_path", return_value="/tmp/.edit.mkv"),
patch("ffx.metadata_editor.executeProcess", return_value=("", "", 0)) as mocked_execute,
patch("ffx.metadata_editor.os.replace") as mocked_replace,
):
result = apply_metadata_edits(
context,
source_path,
baseline_descriptor,
draft_descriptor,
)
mocked_execute.assert_called_once_with(expected_command, context=build_metadata_edit_context(context))
mocked_replace.assert_called_once_with("/tmp/.edit.mkv", source_path)
self.assertEqual(
{
"applied": True,
"dry_run": False,
"target_path": source_path,
"command_sequence": expected_command,
},
{
"applied": result["applied"],
"dry_run": result["dry_run"],
"target_path": result["target_path"],
"command_sequence": result["command_sequence"],
},
)
self.assertIn("timings", result)
self.assertIn("ffmpeg_seconds", result["timings"])
self.assertIn("replace_seconds", result["timings"])
self.assertIn("write_seconds", result["timings"])
def test_apply_metadata_edits_dry_run_skips_replace_and_cleans_temp_path(self):
context = make_context(dry_run=True)
baseline_descriptor = make_descriptor()
draft_descriptor = baseline_descriptor.clone(context=context)
notifications = NotificationCollector()
expected_command = build_metadata_edit_command(
build_metadata_edit_context(context),
"/tmp/example.mkv",
"/tmp/.edit.mkv",
baseline_descriptor,
draft_descriptor,
)
with (
patch("ffx.metadata_editor.create_temporary_output_path", return_value="/tmp/.edit.mkv"),
patch("ffx.metadata_editor.executeProcess") as mocked_execute,
patch("ffx.metadata_editor.os.replace") as mocked_replace,
):
result = apply_metadata_edits(
context,
"/tmp/example.mkv",
baseline_descriptor,
draft_descriptor,
loggingHandler=notifications,
)
mocked_execute.assert_not_called()
mocked_replace.assert_not_called()
self.assertEqual(["ffmpeg dry-run prepared."], notifications.messages)
self.assertEqual([None], notifications.levels)
self.assertEqual(
{
"applied": False,
"dry_run": True,
"target_path": "/tmp/.edit.mkv",
"command_sequence": expected_command,
},
{
"applied": result["applied"],
"dry_run": result["dry_run"],
"target_path": result["target_path"],
"command_sequence": result["command_sequence"],
},
)
self.assertEqual(
{
"ffmpeg_seconds": 0.0,
"replace_seconds": 0.0,
"write_seconds": 0.0,
},
result["timings"],
)
def test_apply_metadata_edits_notifies_with_command_when_verbose(self):
context = make_context()
context["verbosity"] = 1
baseline_descriptor = make_descriptor()
draft_descriptor = baseline_descriptor.clone(context=context)
notifications = NotificationCollector()
with (
patch("ffx.metadata_editor.create_temporary_output_path", return_value="/tmp/.edit.mkv"),
patch("ffx.metadata_editor.executeProcess", return_value=("", "", 0)),
patch("ffx.metadata_editor.os.replace"),
):
apply_metadata_edits(
context,
"/tmp/example.mkv",
baseline_descriptor,
draft_descriptor,
loggingHandler=notifications,
)
self.assertEqual(1, len(notifications.messages))
self.assertTrue(notifications.messages[0].startswith("ffmpeg: ffmpeg "))
self.assertEqual([LogLevel.DEBUG], notifications.levels)
if __name__ == "__main__":
unittest.main()


@@ -2,6 +2,7 @@ from __future__ import annotations
from pathlib import Path
import sys
import time
import unittest
from unittest.mock import patch
@@ -51,6 +52,33 @@ class ProcessTests(unittest.TestCase):
self.assertIn("Command timed out", err)
self.assertIn(sys.executable, err)
def test_execute_process_can_stop_early_while_streaming_stderr(self):
start = time.monotonic()
observed_lines = []
out, err, rc = executeProcess(
[
sys.executable,
"-c",
(
"import sys, time; "
"sys.stderr.write('fatal warning\\n'); sys.stderr.flush(); "
"time.sleep(2); "
"sys.stderr.write('late line\\n'); sys.stderr.flush()"
),
],
stderrLineHandler=lambda line: observed_lines.append(line) or ("fatal warning" in line),
)
elapsed = time.monotonic() - start
self.assertLess(elapsed, 1.5)
self.assertNotEqual(0, rc)
self.assertEqual("", out)
self.assertIn("fatal warning", err)
self.assertNotIn("late line", err)
self.assertEqual(["fatal warning\n"], observed_lines)
def test_get_wrapped_command_sequence_leaves_command_unwrapped_when_limits_disabled(self):
wrapped = getWrappedCommandSequence(
["ffmpeg", "-i", "input.mkv"],


@@ -1,6 +1,7 @@
from __future__ import annotations
from pathlib import Path
import logging
import sys
import unittest
from unittest.mock import patch
@@ -13,6 +14,7 @@ if str(SRC_ROOT) not in sys.path:
from ffx import screen_support # noqa: E402
from ffx.i18n import set_current_language, t # noqa: E402
class StaticConfig:
@@ -23,7 +25,72 @@ class StaticConfig:
return self._data
class FakeTagTable:
def __init__(self):
self.rows = {}
self._next_index = 0
def clear(self):
self.rows.clear()
def add_row(self, *values):
row_key = f"row-{self._next_index}"
self._next_index += 1
self.rows[row_key] = tuple(values)
return row_key
class FakeApp:
def __init__(self, screen_stack):
self.screen_stack = list(screen_stack)
self.pop_called = False
self.exit_called = False
def pop_screen(self):
self.pop_called = True
def exit(self):
self.exit_called = True
class FakeScreen:
def __init__(self, screen_stack):
self.app = FakeApp(screen_stack)
class FakeRichLog:
def __init__(self):
self.messages = []
def write(self, message):
self.messages.append(message)
class FakeScreenWithLog:
def __init__(self):
self.log_view = FakeRichLog()
def query_one(self, selector, _widget_type=None):
if selector == f"#{screen_support.SCREEN_LOG_VIEW_ID}":
return self.log_view
raise LookupError(selector)
class FakeThreadedApp:
def __init__(self, screen):
self.screen = screen
self.calls = []
def call_from_thread(self, func, *args):
self.calls.append((func, args))
return func(*args)
class ScreenSupportTests(unittest.TestCase):
def tearDown(self):
set_current_language("de")
screen_support.set_screen_log_pane_enabled(False)
def make_context(self):
return {
"config": StaticConfig(
@@ -81,6 +148,113 @@ class ScreenSupportTests(unittest.TestCase):
controllers,
)
def test_populate_tag_table_keeps_raw_values_outside_display_labels(self):
table = FakeTagTable()
row_data = screen_support.populate_tag_table(
table,
{"BPS": 4835, "KEEP": "plain"},
ignore_keys=["KEEP"],
remove_keys=["BPS"],
)
self.assertEqual(
{
"row-0": ("BPS", "4835"),
"row-1": ("KEEP", "plain"),
},
row_data,
)
self.assertEqual(
("[red]BPS[/red]", "[red]4835[/red]"),
table.rows["row-0"],
)
self.assertEqual(
("[blue]KEEP[/blue]", "[blue]plain[/blue]"),
table.rows["row-1"],
)
def test_go_back_or_exit_exits_from_first_pushed_screen(self):
screen = FakeScreen(screen_stack=["base", "shows"])
screen_support.go_back_or_exit(screen)
self.assertFalse(screen.app.pop_called)
self.assertTrue(screen.app.exit_called)
def test_go_back_or_exit_pops_nested_screen(self):
screen = FakeScreen(screen_stack=["base", "shows", "details"])
screen_support.go_back_or_exit(screen)
self.assertTrue(screen.app.pop_called)
self.assertFalse(screen.app.exit_called)
def test_localized_column_width_handles_combining_character_labels(self):
set_current_language("ta")
translated = t("SubIndex")
self.assertEqual("துணைச்சுட்டி", translated)
self.assertGreater(len(translated), 8)
self.assertEqual(len(translated) + 2, screen_support.localized_column_width(translated, 8))
def test_build_screen_log_pane_is_hidden_when_debug_mode_is_disabled(self):
screen_support.set_screen_log_pane_enabled(False)
log_pane = screen_support.build_screen_log_pane()
self.assertFalse(log_pane.display)
def test_build_screen_log_pane_is_collapsed_when_debug_mode_is_enabled(self):
screen_support.set_screen_log_pane_enabled(True)
log_pane = screen_support.build_screen_log_pane()
self.assertIsInstance(log_pane, screen_support.ResizableScreenLogPane)
self.assertEqual(screen_support.SCREEN_LOG_PANE_ID, log_pane.id)
self.assertTrue(log_pane.collapsed)
def test_resizable_screen_log_pane_clamps_height_to_minimum(self):
log_pane = screen_support.ResizableScreenLogPane()
log_pane.set_log_height(1)
self.assertEqual(screen_support.SCREEN_LOG_MIN_HEIGHT, log_pane.get_log_height())
def test_configure_screen_log_handler_routes_logger_messages_to_active_screen(self):
logger_name = "ffx-test-screen-log-handler"
logger = logging.getLogger(logger_name)
logger.setLevel(logging.DEBUG)
logger.propagate = False
for handler in list(logger.handlers):
logger.removeHandler(handler)
handler.close()
screen = FakeScreenWithLog()
app = FakeThreadedApp(screen)
try:
handler = screen_support.configure_screen_log_handler(
logger,
app,
enabled=True,
)
self.assertIsNotNone(handler)
logger.info("hello pane")
self.assertEqual(1, len(screen.log_view.messages))
self.assertRegex(
screen.log_view.messages[0],
r"^ffx-test-screen-log-handler\s+INFO\s+\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \| hello pane$",
)
finally:
screen_support.configure_screen_log_handler(logger, app, enabled=False)
for handler in list(logger.handlers):
logger.removeHandler(handler)
handler.close()
if __name__ == "__main__":
unittest.main()


@@ -183,9 +183,7 @@ class ShiftedSeasonControllerTests(unittest.TestCase):
)
self.assertEqual((1, 3), (shifted_season, shifted_episode))
-mocked_info.assert_called_once_with(
-    "Setting season shift 1/3 -> 1/3 from pattern"
-)
+mocked_info.assert_not_called()
def test_shift_season_falls_back_to_identity_when_no_rule_matches(self):
pattern_id = self.add_pattern(1, r"^demo_(s[0-9]+e[0-9]+)\.mkv$")
@@ -199,9 +197,7 @@ class ShiftedSeasonControllerTests(unittest.TestCase):
)
self.assertEqual((4, 20), (shifted_season, shifted_episode))
-mocked_info.assert_called_once_with(
-    "Setting season shift 4/20 -> 4/20 from default"
-)
+mocked_info.assert_not_called()
if __name__ == "__main__":


@@ -0,0 +1,917 @@
from __future__ import annotations
from pathlib import Path
import sys
import unittest
from textual.widgets import Select
SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
sys.path.insert(0, str(SRC_ROOT))
from ffx.audio_layout import AudioLayout # noqa: E402
from ffx.attachment_format import AttachmentFormat # noqa: E402
from ffx.helper import DIFF_ADDED_KEY # noqa: E402
from ffx.iso_language import IsoLanguage # noqa: E402
from ffx.logging_utils import get_ffx_logger # noqa: E402
from ffx.inspect_details_screen import InspectDetailsScreen # noqa: E402
from ffx.i18n import set_current_language # noqa: E402
from ffx.media_descriptor import MediaDescriptor # noqa: E402
from ffx.media_edit_screen import MediaEditScreen # noqa: E402
from ffx.pattern_details_screen import PatternDetailsScreen # noqa: E402
from ffx.show_descriptor import ShowDescriptor # noqa: E402
from ffx.show_details_screen import ShowDetailsScreen # noqa: E402
from ffx.shows_screen import ShowsScreen # noqa: E402
from ffx.track_codec import TrackCodec # noqa: E402
from ffx.track_descriptor import TrackDescriptor # noqa: E402
from ffx.track_details_screen import TrackDetailsScreen # noqa: E402
from ffx.track_type import TrackType # noqa: E402
class FakeTagTable:
def __init__(self):
self.rows = {}
self.columns = []
self.cursor_coordinate = (0, 0)
self._selected_row_key = None
self._next_index = 0
self._row_order = []
def clear(self, columns=False):
self.rows.clear()
self._selected_row_key = None
self._row_order.clear()
if columns:
self.columns.clear()
def add_column(self, label, *, width=None, key=None, default=None):
column_key = key if key is not None else len(self.columns)
self.columns.append(
{
"key": column_key,
"label": label,
"width": width,
"default": default,
}
)
return column_key
def add_row(self, *values):
row_key = f"row-{self._next_index}"
self._next_index += 1
self.rows[row_key] = tuple(values)
self._row_order.append(row_key)
if self._selected_row_key is None:
self._selected_row_key = row_key
return row_key
def coordinate_to_cell_key(self, _coordinate):
return self._selected_row_key, None
def select_row(self, row_key):
self._selected_row_key = row_key
def get_row_index(self, row_key):
return self._row_order.index(row_key)
def remove_row(self, row_key):
self.rows.pop(row_key, None)
if row_key in self._row_order:
self._row_order.remove(row_key)
if self._selected_row_key == row_key:
self._selected_row_key = self._row_order[0] if self._row_order else None
def update_cell(self, row_key, column_key, value):
row = list(self.rows[row_key])
row[int(column_key)] = value
self.rows[row_key] = tuple(row)
class FakeMediaDescriptor:
def __init__(self, track_descriptors, tags=None):
self._track_descriptors = list(track_descriptors)
self._tags = dict(tags or {})
def getTrackDescriptors(self):
return list(self._track_descriptors)
def getTags(self):
return dict(self._tags)
class FakeValueWidget:
def __init__(self, value):
self.value = value
self.disabled = False
class FakeInputWidget:
def __init__(self, value):
self.value = value
class FakeStaticWidget:
def __init__(self, value=""):
self.value = value
def update(self, value):
self.value = value
class FakeSelectionListWidget:
def __init__(self, selected):
self.selected = selected
def add_option(self, _option):
return None
def make_track_descriptor(index, sub_index, track_type):
return TrackDescriptor(
index=index,
sub_index=sub_index,
track_type=track_type,
codec_name=TrackCodec.UNKNOWN,
audio_layout=AudioLayout.LAYOUT_UNDEFINED,
)
def make_show_descriptor(show_id, name="Show", year=2000):
return ShowDescriptor(
id=show_id,
name=name,
year=year,
)
class TagTableScreenStateTests(unittest.TestCase):
def tearDown(self):
set_current_language("de")
def test_track_details_screen_reads_selected_tag_from_raw_row_mapping(self):
screen = object.__new__(TrackDetailsScreen)
screen.trackTagsTable = FakeTagTable()
screen._TrackDetailsScreen__draftTrackTags = {
"BPS": "4835",
"KEEP_ME": "plain",
}
screen._TrackDetailsScreen__ignoreTrackKeys = ["KEEP_ME"]
screen._TrackDetailsScreen__removeTrackKeys = ["BPS"]
screen._TrackDetailsScreen__tagRowData = {}
screen.updateTags()
self.assertEqual(
("[red]BPS[/red]", "[red]4835[/red]"),
screen.trackTagsTable.rows["row-0"],
)
self.assertEqual(
("BPS", "4835"),
screen.getSelectedTag(),
)
def test_track_details_screen_reads_select_values_from_widget_state(self):
screen = object.__new__(TrackDetailsScreen)
screen.context = {"logger": get_ffx_logger()}
screen._TrackDetailsScreen__trackDescriptor = None
screen._TrackDetailsScreen__patternId = 5
screen._TrackDetailsScreen__index = 2
screen._TrackDetailsScreen__subIndex = 0
screen._TrackDetailsScreen__trackCodec = TrackCodec.UNKNOWN
screen._TrackDetailsScreen__draftTrackTags = {"KEEP": "value"}
widgets = {
"#type_select": FakeValueWidget(TrackType.AUDIO),
"#audio_layout_select": FakeValueWidget(AudioLayout.LAYOUT_STEREO),
"#language_select": FakeValueWidget(IsoLanguage.GERMAN),
"#title_input": FakeInputWidget("German Audio"),
"#dispositions_selection_list": FakeSelectionListWidget({0, 6}),
}
screen.query_one = lambda selector, _widget_type=None: widgets[selector]
descriptor = screen.getTrackDescriptorFromInput()
self.assertEqual(TrackType.AUDIO, descriptor.getType())
self.assertEqual(AudioLayout.LAYOUT_STEREO, descriptor.getAudioLayout())
self.assertEqual("deu", descriptor.getTags()["language"])
self.assertEqual("German Audio", descriptor.getTitle())
self.assertEqual("value", descriptor.getTags()["KEEP"])
def test_track_details_screen_preserves_attachment_format_for_attachment_tracks(self):
screen = object.__new__(TrackDetailsScreen)
screen.context = {"logger": get_ffx_logger()}
screen._TrackDetailsScreen__trackDescriptor = None
screen._TrackDetailsScreen__patternId = 5
screen._TrackDetailsScreen__index = 4
screen._TrackDetailsScreen__subIndex = 0
screen._TrackDetailsScreen__trackCodec = TrackCodec.UNKNOWN
screen._TrackDetailsScreen__attachmentFormat = AttachmentFormat.TTF
screen._TrackDetailsScreen__draftTrackTags = {"filename": "font.ttf", "mimetype": "font/ttf"}
widgets = {
"#type_select": FakeValueWidget(TrackType.ATTACHMENT),
"#audio_layout_select": FakeValueWidget(AudioLayout.LAYOUT_UNDEFINED),
"#language_select": FakeValueWidget(Select.NULL),
"#title_input": FakeInputWidget(""),
"#dispositions_selection_list": FakeSelectionListWidget(set()),
}
screen.query_one = lambda selector, _widget_type=None: widgets[selector]
descriptor = screen.getTrackDescriptorFromInput()
self.assertEqual(TrackType.ATTACHMENT, descriptor.getType())
self.assertEqual(AttachmentFormat.TTF, descriptor.getAttachmentFormat())
self.assertEqual(TrackCodec.UNKNOWN, descriptor.getCodec())
def test_track_details_screen_auto_sets_localized_title_from_selected_language(self):
set_current_language("de")
screen = object.__new__(TrackDetailsScreen)
screen._TrackDetailsScreen__titleAutoManaged = True
screen._TrackDetailsScreen__suppressTitleChanged = False
screen._TrackDetailsScreen__lastAutoTitle = ""
widgets = {
"#language_select": FakeValueWidget(IsoLanguage.UNDEFINED),
"#title_input": FakeInputWidget(""),
}
screen.query_one = lambda selector, _widget_type=None: widgets[selector]
widgets["#language_select"].value = IsoLanguage.GERMAN
screen._handle_language_selection_changed(IsoLanguage.GERMAN)
self.assertEqual("Deutsch", widgets["#title_input"].value)
widgets["#language_select"].value = IsoLanguage.ENGLISH
screen._handle_language_selection_changed(IsoLanguage.ENGLISH)
self.assertEqual("Englisch", widgets["#title_input"].value)
def test_track_details_screen_auto_title_stops_after_manual_title_change(self):
set_current_language("de")
screen = object.__new__(TrackDetailsScreen)
screen._TrackDetailsScreen__titleAutoManaged = True
screen._TrackDetailsScreen__suppressTitleChanged = False
screen._TrackDetailsScreen__lastAutoTitle = ""
widgets = {
"#language_select": FakeValueWidget(IsoLanguage.GERMAN),
"#title_input": FakeInputWidget(""),
}
screen.query_one = lambda selector, _widget_type=None: widgets[selector]
screen._handle_language_selection_changed(IsoLanguage.GERMAN)
self.assertEqual("Deutsch", widgets["#title_input"].value)
widgets["#title_input"].value = "Eigener Titel"
screen._handle_title_input_changed("Eigener Titel")
widgets["#language_select"].value = IsoLanguage.ENGLISH
screen._handle_language_selection_changed(IsoLanguage.ENGLISH)
self.assertEqual("Eigener Titel", widgets["#title_input"].value)
def test_track_details_screen_does_not_auto_set_title_when_helper_is_inactive(self):
set_current_language("de")
screen = object.__new__(TrackDetailsScreen)
screen._TrackDetailsScreen__titleAutoManaged = False
screen._TrackDetailsScreen__suppressTitleChanged = False
screen._TrackDetailsScreen__lastAutoTitle = ""
widgets = {
"#language_select": FakeValueWidget(IsoLanguage.UNDEFINED),
"#title_input": FakeInputWidget("Preset"),
}
screen.query_one = lambda selector, _widget_type=None: widgets[selector]
widgets["#language_select"].value = IsoLanguage.GERMAN
screen._handle_language_selection_changed(IsoLanguage.GERMAN)
self.assertEqual("Preset", widgets["#title_input"].value)
def test_track_details_screen_metadata_only_mount_shows_normalized_title_preview(self):
set_current_language("de")
screen = object.__new__(TrackDetailsScreen)
screen._TrackDetailsScreen__index = 2
screen._TrackDetailsScreen__subIndex = 0
screen._TrackDetailsScreen__patternLabel = "demo"
screen._TrackDetailsScreen__trackType = TrackType.AUDIO
screen._TrackDetailsScreen__audioLayout = AudioLayout.LAYOUT_STEREO
screen._TrackDetailsScreen__trackDescriptor = TrackDescriptor(
index=2,
source_index=2,
sub_index=0,
track_type=TrackType.AUDIO,
codec_name=TrackCodec.DTS,
audio_layout=AudioLayout.LAYOUT_STEREO,
tags={"language": "ger"},
)
screen._TrackDetailsScreen__metadataOnly = True
screen._TrackDetailsScreen__titleAutoManaged = True
screen._TrackDetailsScreen__suppressTitleChanged = False
screen._TrackDetailsScreen__lastAutoTitle = ""
screen._TrackDetailsScreen__removeTrackKeys = []
screen._TrackDetailsScreen__ignoreTrackKeys = []
screen._TrackDetailsScreen__draftTrackTags = {}
screen._TrackDetailsScreen__tagRowData = {}
screen.updateTags = lambda: None
widgets = {
"#index_label": FakeStaticWidget(),
"#subindex_label": FakeStaticWidget(),
"#pattern_label": FakeStaticWidget(),
"#type_select": FakeValueWidget(None),
"#audio_layout_select": FakeValueWidget(None),
"#dispositions_selection_list": FakeSelectionListWidget(set()),
"#language_select": FakeValueWidget(None),
"#title_input": FakeInputWidget(""),
}
screen.query_one = lambda selector, _widget_type=None: widgets[selector]
screen.on_mount()
self.assertEqual("Deutsch", widgets["#title_input"].value)
def test_track_details_screen_language_options_are_sorted_by_localized_label(self):
set_current_language("de")
language_options = TrackDetailsScreen.build_language_options()
labels = [label for label, _language in language_options]
languages = [_language for _label, _language in language_options]
self.assertEqual(labels, sorted(labels, key=str.casefold))
self.assertNotIn(IsoLanguage.UNDEFINED, languages)
def test_track_details_screen_uses_blank_select_value_for_undefined_language(self):
self.assertEqual(
TrackDetailsScreen.language_select_value(IsoLanguage.UNDEFINED),
Select.NULL,
)
self.assertEqual(
IsoLanguage.GERMAN,
TrackDetailsScreen.language_select_value(IsoLanguage.GERMAN),
)
def test_pattern_details_screen_reads_selected_track_from_row_mapping(self):
first_track = make_track_descriptor(0, 0, TrackType.VIDEO)
second_track = make_track_descriptor(1, 0, TrackType.SUBTITLE)
screen = object.__new__(PatternDetailsScreen)
screen.tracksTable = FakeTagTable()
screen._PatternDetailsScreen__draftTracks = [first_track, second_track]
screen._PatternDetailsScreen__pattern = None
screen._PatternDetailsScreen__trackRowData = {}
screen.updateTracks()
screen.tracksTable.select_row("row-1")
self.assertIs(second_track, screen.getSelectedTrackDescriptor())
def test_pattern_details_screen_reads_selected_tag_from_raw_row_mapping(self):
screen = object.__new__(PatternDetailsScreen)
screen.tagsTable = FakeTagTable()
screen._PatternDetailsScreen__pattern = None
screen._PatternDetailsScreen__draftTags = {
"BPS": "4835",
"TITLE": "Deutsch [FN]",
}
screen._PatternDetailsScreen__ignoreGlobalKeys = ["TITLE"]
screen._PatternDetailsScreen__removeGlobalKeys = ["BPS"]
screen._PatternDetailsScreen__tagRowData = {}
screen.updateTags()
self.assertEqual(
("[red]BPS[/red]", "[red]4835[/red]"),
screen.tagsTable.rows["row-0"],
)
self.assertEqual(
("BPS", "4835"),
screen.getSelectedTag(),
)
def test_media_edit_screen_reads_selected_track_from_row_mapping(self):
first_track = make_track_descriptor(0, 0, TrackType.VIDEO)
second_track = make_track_descriptor(1, 0, TrackType.SUBTITLE)
screen = object.__new__(MediaEditScreen)
screen.tracksTable = FakeTagTable()
screen._sourceMediaDescriptor = FakeMediaDescriptor(
[first_track, second_track]
)
screen._trackRowData = {}
screen.updateTracks()
screen.tracksTable.select_row("row-1")
self.assertIs(second_track, screen.getSelectedTrackDescriptor())
def test_media_edit_screen_update_tracks_rebuilds_columns_for_auto_width_recalculation(self):
first_track = make_track_descriptor(0, 0, TrackType.VIDEO)
first_track.getTags()["title"] = "A much longer updated title"
screen = object.__new__(MediaEditScreen)
screen.tracksTable = FakeTagTable()
screen._sourceMediaDescriptor = FakeMediaDescriptor([first_track])
screen._trackRowData = {}
screen._applyNormalization = False
screen.updateTracks()
self.assertEqual(9, len(screen.tracksTable.columns))
self.assertIn("A much longer updated title", screen.tracksTable.rows["row-0"])
def test_media_edit_screen_shows_normalized_audio_title_preview(self):
set_current_language("de")
audio_track = TrackDescriptor(
index=1,
source_index=1,
sub_index=0,
track_type=TrackType.AUDIO,
codec_name=TrackCodec.DTS,
audio_layout=AudioLayout.LAYOUT_STEREO,
tags={"language": "ger"},
)
screen = object.__new__(MediaEditScreen)
screen.tracksTable = FakeTagTable()
screen._sourceMediaDescriptor = FakeMediaDescriptor([audio_track])
screen._trackRowData = {}
screen._applyNormalization = True
screen.updateTracks()
self.assertIn("Deutsch", screen.tracksTable.rows["row-0"])
def test_media_edit_screen_shows_normalized_video_title_preview(self):
set_current_language("de")
video_track = TrackDescriptor(
index=0,
source_index=0,
sub_index=0,
track_type=TrackType.VIDEO,
codec_name=TrackCodec.H264,
tags={"language": "ger"},
)
screen = object.__new__(MediaEditScreen)
screen.tracksTable = FakeTagTable()
screen._sourceMediaDescriptor = FakeMediaDescriptor([video_track])
screen._trackRowData = {}
screen._applyNormalization = True
screen.updateTracks()
self.assertIn("Deutsch", screen.tracksTable.rows["row-0"])
def test_media_edit_screen_toggle_normalization_refreshes_tracks(self):
screen = object.__new__(MediaEditScreen)
screen._applyNormalization = False
calls = []
screen.setApplyNormalization = lambda enabled: (
setattr(screen, "_applyNormalization", bool(enabled)),
calls.append("setApplyNormalization"),
)
screen.updateToggleButtons = lambda: calls.append("updateToggleButtons")
screen.updateTracks = lambda: calls.append("updateTracks")
screen.updateDifferences = lambda: calls.append("updateDifferences")
screen.setMessage = lambda _message: calls.append("setMessage")
screen.action_toggle_normalization()
self.assertEqual(
[
"setApplyNormalization",
"updateToggleButtons",
"updateTracks",
"updateDifferences",
"setMessage",
],
calls,
)
def test_media_edit_screen_handle_edit_track_updates_draft_descriptor(self):
original_track = TrackDescriptor(
index=1,
source_index=1,
sub_index=0,
track_type=TrackType.SUBTITLE,
codec_name=TrackCodec.UNKNOWN,
tags={"language": "ger"},
)
context = {"logger": get_ffx_logger()}
updated_track = original_track.clone(context=context)
updated_track.getTags()["language"] = "eng"
screen = object.__new__(MediaEditScreen)
screen.context = context
screen._sourceMediaDescriptor = MediaDescriptor(
context=context,
track_descriptors=[original_track],
)
calls = []
screen.setMessage = lambda _message: calls.append("setMessage")
screen.refreshAfterDraftChange = lambda: calls.append("refreshAfterDraftChange")
screen.handle_edit_track(updated_track)
self.assertEqual(
"eng",
screen._sourceMediaDescriptor.getTrackDescriptors()[0].getTags()["language"],
)
self.assertEqual(
["setMessage", "refreshAfterDraftChange"],
calls,
)
def test_media_edit_screen_screen_resume_refreshes_draft_tables(self):
screen = object.__new__(MediaEditScreen)
screen.tracksTable = FakeTagTable()
calls = []
        screen.refreshAfterDraftChange = lambda: calls.append("refreshAfterDraftChange")
        screen.updateToggleButtons = lambda: calls.append("updateToggleButtons")
        screen.on_screen_resume(None)
        self.assertEqual(
            ["refreshAfterDraftChange", "updateToggleButtons"],
            calls,
        )

    def test_pattern_details_screen_screen_resume_refreshes_tables(self):
        screen = object.__new__(PatternDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen.tagsTable = FakeTagTable()
        screen.shiftedSeasonsTable = FakeTagTable()
        screen._PatternDetailsScreen__pattern = object()
        screen._PatternDetailsScreen__showDescriptor = None
        widgets = {
            "#show_quality_hint": FakeStaticWidget(),
        }
        screen.query_one = lambda selector, _type=None: widgets[selector]
        calls = []
        screen.updateTags = lambda: calls.append("updateTags")
        screen.updateTracks = lambda: calls.append("updateTracks")
        screen.updateShiftedSeasons = lambda: calls.append("updateShiftedSeasons")
        screen.on_screen_resume(None)
        self.assertEqual(
            ["updateTags", "updateTracks", "updateShiftedSeasons"],
            calls,
        )

    def test_pattern_details_screen_on_mount_shows_show_quality_hint_for_new_pattern(self):
        set_current_language("en")
        screen = object.__new__(PatternDetailsScreen)
        screen.context = {}
        screen._PatternDetailsScreen__showDescriptor = ShowDescriptor(
            id=7,
            name="Demo",
            year=1999,
            quality=23,
        )
        screen._PatternDetailsScreen__pattern = None
        widgets = {
            "#showlabel": FakeStaticWidget(),
            "#show_quality_hint": FakeStaticWidget(),
        }
        screen.query_one = lambda selector, _type=None: widgets[selector]
        screen.on_mount()
        self.assertEqual("7 - Demo (1999)", widgets["#showlabel"].value)
        self.assertEqual("Show: 23", widgets["#show_quality_hint"].value)

    def test_pattern_details_screen_show_quality_hint_is_hidden_when_pattern_quality_exists(self):
        set_current_language("en")
        screen = object.__new__(PatternDetailsScreen)
        screen._PatternDetailsScreen__showDescriptor = ShowDescriptor(
            id=7,
            name="Demo",
            year=1999,
            quality=23,
        )
        screen._PatternDetailsScreen__pattern = type(
            "_Pattern",
            (),
            {"quality": 19},
        )()
        self.assertEqual("", screen.getShowQualityHintText())

    def test_inspect_details_screen_handle_edit_pattern_refreshes_even_without_result(self):
        screen = object.__new__(InspectDetailsScreen)
        calls = []
        screen.reloadProperties = lambda reset_draft=True: calls.append(
            ("reloadProperties", reset_draft)
        )
        screen._currentPattern = None
        screen.updateMediaTags = lambda: calls.append("updateMediaTags")
        screen.updateTracks = lambda: calls.append("updateTracks")
        screen.updateDifferences = lambda: calls.append("updateDifferences")
        screen.handle_edit_pattern(None)
        self.assertEqual(
            [
                ("reloadProperties", True),
                "updateMediaTags",
                "updateTracks",
                "updateDifferences",
            ],
            calls,
        )

    def test_pattern_details_screen_reads_selected_shifted_season_from_row_mapping(self):
        screen = object.__new__(PatternDetailsScreen)
        screen.shiftedSeasonsTable = FakeTagTable()
        screen._PatternDetailsScreen__pattern = object()
        screen._PatternDetailsScreen__shiftedSeasonRowData = {}
        row_key = screen.shiftedSeasonsTable.add_row("9", "1", "3", "1", "0")
        screen._PatternDetailsScreen__shiftedSeasonRowData[row_key] = {
            "id": 44,
            "original_season": 9,
            "first_episode": 1,
            "last_episode": 3,
            "season_offset": 1,
            "episode_offset": 0,
        }
        screen.shiftedSeasonsTable.rows[row_key] = ("broken", "ui", "values", "!", "?")
        self.assertEqual(
            {
                "id": 44,
                "original_season": 9,
                "first_episode": 1,
                "last_episode": 3,
                "season_offset": 1,
                "episode_offset": 0,
            },
            screen.getSelectedShiftedSeasonObjFromInput(),
        )

    def test_show_details_screen_reads_selected_pattern_from_row_mapping(self):
        screen = object.__new__(ShowDetailsScreen)
        screen.patternTable = FakeTagTable()
        screen._ShowDetailsScreen__showDescriptor = make_show_descriptor(7, "Demo", 1999)
        screen._ShowDetailsScreen__patternRowData = {}
        row_key = screen._add_pattern_row(pattern_id=11, pattern_text=r"^demo_(s[0-9]+e[0-9]+)\.mkv$")
        screen.patternTable.rows[row_key] = ("display text changed",)
        self.assertEqual(
            {
                "id": 11,
                "show_id": 7,
                "pattern": r"^demo_(s[0-9]+e[0-9]+)\.mkv$",
            },
            screen.getSelectedPatternDescriptor(),
        )

    def test_show_details_screen_reads_selected_shifted_season_from_row_mapping(self):
        screen = object.__new__(ShowDetailsScreen)
        screen.shiftedSeasonsTable = FakeTagTable()
        screen._ShowDetailsScreen__shiftedSeasonRowData = {}
        row_key = screen.shiftedSeasonsTable.add_row("1", "", "", "0", "0")
        screen._ShowDetailsScreen__shiftedSeasonRowData[row_key] = {
            "id": 3,
            "original_season": 1,
            "first_episode": -1,
            "last_episode": -1,
            "season_offset": 0,
            "episode_offset": 0,
        }
        screen.shiftedSeasonsTable.rows[row_key] = ("bad", "visible", "data", "x", "y")
        self.assertEqual(
            {
                "id": 3,
                "original_season": 1,
                "first_episode": -1,
                "last_episode": -1,
                "season_offset": 0,
                "episode_offset": 0,
            },
            screen.getSelectedShiftedSeasonObjFromInput(),
        )

    def test_shows_screen_reads_selected_show_id_from_row_mapping(self):
        screen = object.__new__(ShowsScreen)
        screen.table = FakeTagTable()
        screen._ShowsScreen__showRowData = {}
        row_key = screen._add_show_row(make_show_descriptor(4, "Mapped", 2011))
        screen.table.rows[row_key] = ("999", "Visible", "2099")
        self.assertEqual(4, screen.getSelectedShowId())

    def test_inspect_details_screen_reads_selected_show_from_row_mapping(self):
        screen = object.__new__(InspectDetailsScreen)
        screen.showsTable = FakeTagTable()
        screen._showRowData = {}
        placeholder_key = screen._add_show_row(None)
        show_key = screen._add_show_row(make_show_descriptor(8, "Real Show", 2020))
        screen.showsTable.select_row(show_key)
        screen.showsTable.rows[show_key] = ("oops", "display", "changed")
        selected_show = screen.getSelectedShowDescriptor()
        self.assertIsInstance(selected_show, ShowDescriptor)
        self.assertEqual(8, selected_show.getId())
        self.assertEqual(0, screen.getRowIndexFromShowId(-1))
        self.assertEqual(1, screen.getRowIndexFromShowId(8))
        screen.removeShow(-1)
        self.assertNotIn(placeholder_key, screen._showRowData)
        self.assertEqual(0, screen.getRowIndexFromShowId(8))

    def test_inspect_details_screen_update_tracks_shows_target_pattern_tracks(self):
        source_track = TrackDescriptor(
            index=1,
            source_index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "ger", "title": "German Full"},
        )
        target_track = TrackDescriptor(
            index=1,
            source_index=1,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "eng", "title": "English Full"},
        )
        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([source_track])
        screen._targetMediaDescriptor = FakeMediaDescriptor([target_track])
        screen._currentPattern = object()
        screen._trackRowData = {}
        screen._applyNormalization = False
        screen.updateTracks()
        self.assertIn("English Full", screen.tracksTable.rows["row-0"])
        self.assertIs(target_track, screen.getSelectedTrackDescriptor())

    def test_inspect_details_screen_update_tracks_shows_attachment_format_and_blanks_language(self):
        attachment_track = TrackDescriptor(
            index=4,
            source_index=4,
            sub_index=0,
            track_type=TrackType.ATTACHMENT,
            attachment_format=AttachmentFormat.TTF,
            tags={"filename": "font.ttf", "mimetype": "font/ttf"},
        )
        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([attachment_track])
        screen._targetMediaDescriptor = None
        screen._currentPattern = None
        screen._trackRowData = {}
        screen._applyNormalization = False
        screen.updateTracks()
        row = screen.tracksTable.rows["row-0"]
        self.assertEqual("4", row[0])
        self.assertEqual("TTF", row[3])
        self.assertEqual(" ", row[5])
        self.assertEqual(" ", row[7])
        self.assertEqual(" ", row[8])

    def test_inspect_details_screen_update_tracks_shows_unknown_for_unknown_attachment_format(self):
        attachment_track = TrackDescriptor(
            index=5,
            source_index=5,
            sub_index=0,
            track_type=TrackType.ATTACHMENT,
            attachment_format=AttachmentFormat.UNKNOWN,
            tags={"filename": "blob.bin", "mimetype": "application/octet-stream"},
        )
        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([attachment_track])
        screen._targetMediaDescriptor = None
        screen._currentPattern = None
        screen._trackRowData = {}
        screen._applyNormalization = False
        screen.updateTracks()
        row = screen.tracksTable.rows["row-0"]
        self.assertEqual("unknown", row[3])
        self.assertEqual(" ", row[5])

    def test_inspect_details_screen_maps_target_selection_back_to_source_track(self):
        source_track = TrackDescriptor(
            index=3,
            source_index=7,
            sub_index=1,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "ger"},
        )
        target_track = TrackDescriptor(
            index=1,
            source_index=7,
            sub_index=0,
            track_type=TrackType.SUBTITLE,
            codec_name=TrackCodec.UNKNOWN,
            tags={"language": "eng"},
        )
        screen = object.__new__(InspectDetailsScreen)
        screen.tracksTable = FakeTagTable()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([source_track])
        screen._targetMediaDescriptor = FakeMediaDescriptor([target_track])
        screen._currentPattern = object()
        screen._trackRowData = {}
        screen._applyNormalization = False
        screen.updateTracks()
        self.assertIs(source_track, screen.getTrackEditSourceDescriptor())

    def test_inspect_details_screen_action_update_pattern_uses_existing_change_set_before_reload(self):
        class _FakePattern:
            def getPattern(self):
                return r"demo_(s[0-9]+e[0-9]+)\.mkv"

            def getId(self):
                return 9

        class _FakeTagController:
            def __init__(self, calls):
                self._calls = calls

            def deleteMediaTagByKey(self, pattern_id, key):
                self._calls.append(("deleteMediaTagByKey", pattern_id, key))

        calls = []
        screen = object.__new__(InspectDetailsScreen)
        screen._currentPattern = _FakePattern()
        screen._mediaChangeSetObj = {
            "tags": {
                DIFF_ADDED_KEY: {"TITLE": "Demo"},
            }
        }
        screen._tac = _FakeTagController(calls)
        screen._tc = type(
            "_FakeTrackController",
            (),
            {
                "addTrack": staticmethod(lambda *_args, **_kwargs: None),
                "deleteTrack": staticmethod(lambda *_args, **_kwargs: None),
                "setDispositionState": staticmethod(lambda *_args, **_kwargs: None),
            },
        )()
        screen._sourceMediaDescriptor = FakeMediaDescriptor([], tags={})
        screen._targetMediaDescriptor = FakeMediaDescriptor([])
        screen.getPatternObjFromInput = lambda: {
            "show_id": 1,
            "pattern": r"demo_(s[0-9]+e[0-9]+)\.mkv",
        }
        screen.reloadProperties = lambda reset_draft=True: calls.append(
            ("reloadProperties", reset_draft)
        )
        screen.updateMediaTags = lambda: calls.append("updateMediaTags")
        screen.updateTracks = lambda: calls.append("updateTracks")
        screen.updateDifferences = lambda: calls.append("updateDifferences")
        screen.action_update_pattern()
        self.assertEqual(
            [
                ("deleteMediaTagByKey", 9, "TITLE"),
                ("reloadProperties", True),
                "updateMediaTags",
                "updateTracks",
                "updateDifferences",
            ],
            calls,
        )


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,25 @@
from __future__ import annotations

from pathlib import Path
import sys
import unittest

SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))

from ffx.track_codec import TrackCodec  # noqa: E402


class TrackCodecIdentificationTests(unittest.TestCase):
    def test_identify_modern_webm_codecs(self):
        self.assertEqual(TrackCodec.VP9, TrackCodec.identify("vp9"))
        self.assertEqual(TrackCodec.OPUS, TrackCodec.identify("opus"))
        self.assertEqual(TrackCodec.WEBVTT, TrackCodec.identify("webvtt"))


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,61 @@
from __future__ import annotations

from pathlib import Path
import sys
import unittest

SRC_ROOT = Path(__file__).resolve().parents[2] / "src"
if str(SRC_ROOT) not in sys.path:
    sys.path.insert(0, str(SRC_ROOT))

from ffx.attachment_format import AttachmentFormat  # noqa: E402
from ffx.track_codec import TrackCodec  # noqa: E402
from ffx.track_descriptor import TrackDescriptor  # noqa: E402
from ffx.track_type import TrackType  # noqa: E402


class TrackDescriptorProbeTests(unittest.TestCase):
    def test_attachment_without_codec_name_uses_font_metadata_to_identify_ttf(self):
        descriptor = TrackDescriptor.fromFfprobe(
            {
                "index": 4,
                "codec_type": "attachment",
                "disposition": {"default": 0},
                "tags": {
                    "filename": "AmazonEmberTanuki-Italic.ttf",
                    "mimetype": "font/ttf",
                },
            },
            subIndex=0,
        )
        self.assertIsNotNone(descriptor)
        self.assertEqual(TrackType.ATTACHMENT, descriptor.getType())
        self.assertEqual(AttachmentFormat.TTF, descriptor.getAttachmentFormat())
        self.assertEqual(AttachmentFormat.TTF, descriptor.getFormatDescriptor())
        self.assertEqual(TrackCodec.UNKNOWN, descriptor.getCodec())

    def test_attachment_without_codec_name_still_probes_as_unknown_when_not_font(self):
        descriptor = TrackDescriptor.fromFfprobe(
            {
                "index": 9,
                "codec_type": "attachment",
                "disposition": {"default": 0},
                "tags": {
                    "filename": "cover.bin",
                    "mimetype": "application/octet-stream",
                },
            },
            subIndex=0,
        )
        self.assertIsNotNone(descriptor)
        self.assertEqual(TrackType.ATTACHMENT, descriptor.getType())
        self.assertEqual(AttachmentFormat.UNKNOWN, descriptor.getAttachmentFormat())
        self.assertEqual(TrackCodec.UNKNOWN, descriptor.getCodec())


if __name__ == "__main__":
    unittest.main()

@@ -369,6 +369,7 @@ from ffx.constants import (
     DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS,
     DEFAULT_SHOW_INDICATOR_SEASON_DIGITS,
 )
+from ffx.i18n import resolve_application_language
 
 template_path = Path(os.environ["FFX_CONFIG_TEMPLATE_FILE"])
 environment = Environment(
@@ -381,6 +382,7 @@ template = environment.get_template(template_path.name)
 
 sys.stdout.write(
     template.render(
+        language_json=json.dumps(resolve_application_language()),
         database_path_json=json.dumps(os.environ["FFX_DATABASE_PATH"]),
         log_directory_json=json.dumps(os.environ["FFX_LOG_DIRECTORY"]),
         subtitles_directory_json=json.dumps(os.environ["FFX_SUBTITLES_DIRECTORY"]),

@@ -0,0 +1,299 @@
#!/usr/bin/env python3
from __future__ import annotations

import ast
import gettext
import json
from pathlib import Path

from ffx.i18n import SUPPORTED_LANGUAGES
from ffx.iso_language import IsoLanguage

REPO_ROOT = Path(__file__).resolve().parents[1]
SOURCE_ROOT = REPO_ROOT / "src" / "ffx"
OUTPUT_ROOT = REPO_ROOT / "assets" / "i18n"

LANGUAGE_CODES = ("de", "en", "fr", "ja", "nb", "eo", "ta", "pt", "es")
TRANSLATED_LANGUAGE_CODES = ("de", "fr", "ja", "nb", "eo", "ta", "pt", "es")

EXTRA_PHRASES = {
    "Differences",
    "Differences (file->db/output)",
    "Planned Changes (file->edited output)",
    "video",
    "audio",
    "subtitle",
    "attachment",
    "unknown",
    "default",
    "forced",
    "dub",
    "original",
    "comment",
    "lyrics",
    "karaoke",
    "hearing_impaired",
    "visual_impaired",
    "clean_effects",
    "attached_pic",
    "timed_thumbnails",
    "non_diegetic",
    "captions",
    "descriptions",
    "metadata",
    "dependent",
    "still_image",
    "stereo",
    "5.1(side)",
    "6.1",
    "7.1",
    "6ch",
    "5.0(side)",
    "undefined",
}

PHRASE_ROWS = [
("<New show>", "<Neue Serie>", "<Nouvelle série>", "<新しい番組>", "<Ny serie>", "<Nova serio>", "<புதிய தொடர்>", "<Nova série>", "<Nueva serie>"),
("Add", "Hinzufügen", "Ajouter", "追加", "Legg til", "Aldoni", "சேர்", "Adicionar", "Añadir"),
("Add Pattern", "Muster hinzufügen", "Ajouter un modèle", "パターンを追加", "Legg til mønster", "Aldoni ŝablonon", "வடிவத்தை சேர்", "Adicionar padrão", "Añadir patrón"),
("Apply", "Anwenden", "Appliquer", "適用", "Bruk", "Apliki", "பயன்படுத்து", "Aplicar", "Aplicar"),
("Apply failed: {error}", "Anwenden fehlgeschlagen: {error}", "Échec de l'application : {error}", "適用に失敗しました: {error}", "Kunne ikke bruke endringene: {error}", "Apliko malsukcesis: {error}", "பயன்படுத்தல் தோல்வியடைந்தது: {error}", "Falha ao aplicar: {error}", "Error al aplicar: {error}"),
("Are you sure to delete the following filename pattern?", "Möchtest du das folgende Dateinamensmuster wirklich löschen?", "Voulez-vous vraiment supprimer le modèle de nom de fichier suivant ?", "次のファイル名パターンを削除してもよろしいですか?", "Er du sikker på at du vil slette følgende filnavnmønster?", "Ĉu vi certe volas forigi la jenan dosiernoman ŝablonon?", "பின்வரும் கோப்பு பெயர் வடிவத்தை நீக்க விரும்புகிறீர்களா?", "Tem certeza de que deseja excluir o seguinte padrão de nome de arquivo?", "¿Seguro que quieres eliminar el siguiente patrón de nombre de archivo?"),
("Are you sure to delete the following shifted season?", "Möchtest du die folgende verschobene Staffel wirklich löschen?", "Voulez-vous vraiment supprimer la saison décalée suivante ?", "次のシーズンシフト設定を削除してもよろしいですか?", "Er du sikker på at du vil slette følgende forskjøvede sesong?", "Ĉu vi certe volas forigi la jenan ŝovitan sezonon?", "பின்வரும் மாற்றிய சீசனை நீக்க விரும்புகிறீர்களா?", "Tem certeza de que deseja excluir a seguinte temporada deslocada?", "¿Seguro que quieres eliminar la siguiente temporada desplazada?"),
("Are you sure to delete the following show?", "Möchtest du die folgende Serie wirklich löschen?", "Voulez-vous vraiment supprimer la série suivante ?", "次の番組を削除してもよろしいですか?", "Er du sikker på at du vil slette følgende serie?", "Ĉu vi certe volas forigi la jenan serion?", "பின்வரும் தொடரை நீக்க விரும்புகிறீர்களா?", "Tem certeza de que deseja excluir a seguinte série?", "¿Seguro que quieres eliminar la siguiente serie?"),
("Are you sure to delete the following {track_type} track?", "Möchtest du den folgenden {track_type}-Stream wirklich löschen?", "Voulez-vous vraiment supprimer la piste {track_type} suivante ?", "次の{track_type}ストリームを削除してもよろしいですか?", "Er du sikker på at du vil slette følgende {track_type}-spor?", "Ĉu vi certe volas forigi la jenan {track_type}-trakon?", "பின்வரும் {track_type} ஸ்ட்ரீமை நீக்க விரும்புகிறீர்களா?", "Tem certeza de que deseja excluir a seguinte faixa {track_type}?", "¿Seguro que quieres eliminar la pista {track_type} siguiente?"),
("Are you sure to delete this tag?", "Möchtest du dieses Tag wirklich löschen?", "Voulez-vous vraiment supprimer cette balise ?", "このタグを削除してもよろしいですか?", "Er du sikker på at du vil slette denne taggen?", "Ĉu vi certe volas forigi ĉi tiun etikedon?", "இந்த குறிச்சொல்லை நீக்க விரும்புகிறீர்களா?", "Tem certeza de que deseja excluir esta tag?", "¿Seguro que quieres eliminar esta etiqueta?"),
("Audio Layout", "Audiolayout", "Disposition audio", "音声レイアウト", "Lydoppsett", "Aŭda aranĝo", "ஒலி அமைப்பு", "Layout de áudio", "Disposición de audio"),
("Back", "Zurück", "Retour", "戻る", "Tilbake", "Reen", "பின்", "Voltar", "Volver"),
("Cancel", "Abbrechen", "Annuler", "キャンセル", "Avbryt", "Nuligi", "ரத்து", "Cancelar", "Cancelar"),
("Cannot add another stream with disposition flag 'default' or 'forced' set", "Es kann kein weiterer Stream mit gesetztem Dispositions-Flag 'default' oder 'forced' hinzugefügt werden", "Impossible d'ajouter un autre flux avec l'indicateur de disposition 'default' ou 'forced'", "default または forced の disposition が設定されたストリームはこれ以上追加できません", "Kan ikke legge til en ny strøm med disposisjonsflagget 'default' eller 'forced' satt", "Ne eblas aldoni alian fluon kun la dispozicia flago 'default' aŭ 'forced' aktiva", "'default' அல்லது 'forced' disposition கொடி அமைந்த மற்றொரு ஸ்ட்ரீமை சேர்க்க முடியாது", "Não é possível adicionar outro fluxo com a flag de disposição 'default' ou 'forced' definida", "No se puede añadir otro flujo con la marca de disposición 'default' o 'forced' activada"),
("Changes applied and file reloaded.", "Änderungen angewendet und Datei neu geladen.", "Modifications appliquées et fichier rechargé.", "変更を適用し、ファイルを再読み込みしました。", "Endringene er brukt og filen er lastet inn på nytt.", "Ŝanĝoj aplikitaj kaj dosiero reŝargita.", "மாற்றங்கள் பயன்படுத்தப்பட்டு கோப்பு மீளேற்றப்பட்டது.", "Alterações aplicadas e arquivo recarregado.", "Cambios aplicados y archivo recargado."),
("Cleanup", "Bereinigen", "Nettoyage", "クリーンアップ", "Rydd opp", "Purigado", "சுத்திகரிப்பு", "Limpeza", "Limpieza"),
("Cleanup disabled.", "Bereinigung deaktiviert.", "Nettoyage désactivé.", "クリーンアップを無効にしました。", "Rydding deaktivert.", "Purigado malŝaltita.", "சுத்திகரிப்பு முடக்கப்பட்டது.", "Limpeza desativada.", "Limpieza desactivada."),
("Cleanup enabled.", "Bereinigung aktiviert.", "Nettoyage activé.", "クリーンアップを有効にしました。", "Rydding aktivert.", "Purigado ŝaltita.", "சுத்திகரிப்பு இயக்கப்பட்டது.", "Limpeza ativada.", "Limpieza activada."),
("Codec", "Codec", "Codec", "コーデック", "Kodek", "Kodeko", "கோடெக்", "Codec", "Códec"),
("Continuing edit session.", "Bearbeitung wird fortgesetzt.", "Poursuite de la session d'édition.", "編集セッションを続行します。", "Fortsetter redigeringsøkten.", "Daŭrigante la redaktan seancon.", "திருத்த அமர்வு தொடர்கிறது.", "Continuando a sessão de edição.", "Continuando la sesión de edición."),
("Default", "Standard", "Par défaut", "デフォルト", "Standard", "Defaŭlta", "இயல்புநிலை", "Padrão", "Predeterminado"),
("Delete", "Löschen", "Supprimer", "削除", "Slett", "Forigi", "நீக்கு", "Excluir", "Eliminar"),
("Delete Show", "Serie löschen", "Supprimer la série", "番組を削除", "Slett serie", "Forigi serion", "தொடரை நீக்கு", "Excluir série", "Eliminar serie"),
("Deleted media tag {tag!r}.", "Medien-Tag {tag!r} gelöscht.", "Balise média {tag!r} supprimée.", "メディアタグ {tag!r} を削除しました。", "Mediataggen {tag!r} ble slettet.", "Forigis la aŭdvidan etikedon {tag!r}.", "மீடியா குறிச்சொல் {tag!r} நீக்கப்பட்டது.", "Tag de mídia {tag!r} excluída.", "Etiqueta de medios {tag!r} eliminada."),
("Discard", "Verwerfen", "Ignorer", "破棄", "Forkast", "Forĵeti", "கைவிடு", "Descartar", "Descartar"),
("Discard pending metadata changes and quit?", "Ausstehende Metadatenänderungen verwerfen und beenden?", "Ignorer les modifications de métadonnées en attente et quitter ?", "保留中のメタデータ変更を破棄して終了しますか?", "Forkaste ventende metadataendringer og avslutte?", "Ĉu forĵeti atendatajn metadatumajn ŝanĝojn kaj eliri?", "நிலுவையில் உள்ள மெட்டாடேட்டா மாற்றங்களை கைவிட்டு வெளியேறவா?", "Descartar alterações pendentes de metadados e sair?", "¿Descartar los cambios pendientes de metadatos y salir?"),
("Discard pending metadata changes and reload the file state?", "Ausstehende Metadatenänderungen verwerfen und Dateistand neu laden?", "Ignorer les modifications de métadonnées en attente et recharger l'état du fichier ?", "保留中のメタデータ変更を破棄してファイル状態を再読み込みしますか?", "Forkaste ventende metadataendringer og laste filtilstanden på nytt?", "Ĉu forĵeti atendatajn metadatumajn ŝanĝojn kaj reŝargi la dosieran staton?", "நிலுவையில் உள்ள மெட்டாடேட்டா மாற்றங்களை கைவிட்டு கோப்பு நிலையை மீளேற்றவா?", "Descartar alterações pendentes de metadados e recarregar o estado do arquivo?", "¿Descartar los cambios pendientes de metadatos y recargar el estado del archivo?"),
("Differences", "Unterschiede", "Différences", "差分", "Forskjeller", "Diferencoj", "வேறுபாடுகள்", "Diferenças", "Diferencias"),
("Differences (file->db/output)", "Unterschiede (Datei->DB/Ausgabe)", "Différences (fichier->BD/sortie)", "差分 (ファイル->DB/出力)", "Forskjeller (fil->DB/utdata)", "Diferencoj (dosiero->DB/eligo)", "வேறுபாடுகள் (கோப்பு->DB/வெளியீடு)", "Diferenças (arquivo->BD/saída)", "Diferencias (archivo->BD/salida)"),
("Down", "Runter", "Descendre", "下へ", "Ned", "Malsupren", "கீழ்", "Baixo", "Abajo"),
("Dry-run: would rewrite via temporary file {target_path}", "Trockenlauf: würde über temporäre Datei {target_path} neu schreiben", "Simulation : réécrirait via le fichier temporaire {target_path}", "ドライラン: 一時ファイル {target_path} 経由で再書き込みします", "Tørrkjøring: ville skrevet om via midlertidig fil {target_path}", "Seka provo: reskribus per provizora dosiero {target_path}", "Dry-run: தற்காலிக கோப்பு {target_path} வழியாக மறுஎழுதப்படும்", "Execução simulada: regravaria via arquivo temporário {target_path}", "Simulación: reescribiría mediante el archivo temporal {target_path}"),
("Edit", "Bearbeiten", "Modifier", "編集", "Rediger", "Redakti", "திருத்து", "Editar", "Editar"),
("Edit Pattern", "Muster bearbeiten", "Modifier le modèle", "パターンを編集", "Rediger mønster", "Redakti ŝablonon", "வடிவத்தை திருத்து", "Editar padrão", "Editar patrón"),
("Edit Show", "Serie bearbeiten", "Modifier la série", "番組を編集", "Rediger serie", "Redakti serion", "தொடரை திருத்து", "Editar série", "Editar serie"),
("Edit filename pattern", "Dateinamensmuster bearbeiten", "Modifier le modèle de nom de fichier", "ファイル名パターンを編集", "Rediger filnavnmønster", "Redakti dosiernoman ŝablonon", "கோப்பு பெயர் வடிவத்தை திருத்து", "Editar padrão de nome de arquivo", "Editar patrón de nombre de archivo"),
("Edit shifted season", "Verschobene Staffel bearbeiten", "Modifier la saison décalée", "シフト済みシーズンを編集", "Rediger forskjøvet sesong", "Redakti ŝovitan sezonon", "மாற்றிய சீசனை திருத்து", "Editar temporada deslocada", "Editar temporada desplazada"),
("Edit stream", "Stream bearbeiten", "Modifier le flux", "ストリームを編集", "Rediger strøm", "Redakti fluon", "ஸ்ட்ரீமை திருத்து", "Editar fluxo", "Editar flujo"),
("Episode Offset", "Episodenoffset", "Décalage d'épisode", "エピソードオフセット", "Episodeforskyvning", "Epizoda deŝovo", "அத்தியாய இடச்சரிவு", "Deslocamento de episódio", "Desplazamiento de episodio"),
("Episode offset", "Episodenoffset", "Décalage d'épisode", "エピソードオフセット", "Episodeforskyvning", "Epizoda deŝovo", "அத்தியாய இடச்சரிவு", "Deslocamento de episódio", "Desplazamiento de episodio"),
("File", "Datei", "Fichier", "ファイル", "Fil", "Dosiero", "கோப்பு", "Arquivo", "Archivo"),
("File patterns", "Dateimuster", "Modèles de fichiers", "ファイルパターン", "Filmønstre", "Dosieraj ŝablonoj", "கோப்பு வடிவங்கள்", "Padrões de arquivo", "Patrones de archivo"),
("First Episode", "Erste Episode", "Premier épisode", "最初のエピソード", "Første episode", "Unua epizodo", "முதல் அத்தியாயம்", "Primeiro episódio", "Primer episodio"),
("First episode", "Erste Episode", "Premier épisode", "最初のエピソード", "Første episode", "Unua epizodo", "முதல் அத்தியாயம்", "Primeiro episódio", "Primer episodio"),
("Forced", "Erzwungen", "Forcé", "強制", "Tvungen", "Devigita", "கட்டாயம்", "Forçado", "Forzado"),
("Help", "Hilfe", "Aide", "ヘルプ", "Hjelp", "Helpo", "உதவி", "Ajuda", "Ayuda"),
("Help Screen", "Hilfe-Bildschirm", "Écran d'aide", "ヘルプ画面", "Hjelpeskjerm", "Helpa ekrano", "உதவி திரை", "Tela de ajuda", "Pantalla de ayuda"),
("ID", "ID", "ID", "ID", "ID", "ID", "அடையாளம்", "ID", "ID"),
("Identify", "Identifizieren", "Identifier", "識別", "Identifiser", "Identigi", "அடையாளம் காட்டு", "Identificar", "Identificar"),
("Index", "Index", "Index", "インデックス", "Indeks", "Indekso", "சுட்டி", "Índice", "Índice"),
("Index / Subindex", "Index / Unterindex", "Index / Sous-index", "インデックス / サブインデックス", "Indeks / Underindeks", "Indekso / Subindekso", "சுட்டி / துணைச்சுட்டி", "Índice / Subíndice", "Índice / Subíndice"),
("Index Episode Digits", "Index-Episodenziffern", "Chiffres d'épisode d'index", "インデックスのエピソード桁数", "Siffer for episodeindeks", "Ciferoj de epizoda indekso", "அத்தியாய சுட்டி இலக்கங்கள்", "Dígitos do índice do episódio", "Dígitos del índice de episodio"),
("Index Season Digits", "Index-Staffelziffern", "Chiffres de saison d'index", "インデックスのシーズン桁数", "Siffer for sesongindeks", "Ciferoj de sezona indekso", "சீசன் சுட்டி இலக்கங்கள்", "Dígitos do índice da temporada", "Dígitos del índice de temporada"),
("Indicator Edisode Digits", "Indikator-Episodenziffern", "Chiffres d'épisode de l'indicateur", "インジケーターのエピソード桁数", "Siffer for episodeindikator", "Ciferoj de epizoda indikilo", "அத்தியாய குறியீட்டு இலக்கங்கள்", "Dígitos do indicador do episódio", "Dígitos del indicador de episodio"),
("Indicator Season Digits", "Indikator-Staffelziffern", "Chiffres de saison de l'indicateur", "インジケーターのシーズン桁数", "Siffer for sesongindikator", "Ciferoj de sezona indikilo", "சீசன் குறியீட்டு இலக்கங்கள்", "Dígitos do indicador da temporada", "Dígitos del indicador de temporada"),
("Keep Editing", "Weiter bearbeiten", "Continuer l'édition", "編集を続ける", "Fortsett redigeringen", "Daŭrigi redaktadon", "திருத்தலை தொடரு", "Continuar editando", "Seguir editando"),
("Keeping pending changes.", "Ausstehende Änderungen bleiben erhalten.", "Les modifications en attente sont conservées.", "保留中の変更を保持します。", "Beholder ventende endringer.", "Konservas atendatajn ŝanĝojn.", "நிலுவையில் உள்ள மாற்றங்கள் வைக்கப்படுகின்றன.", "Mantendo alterações pendentes.", "Se conservan los cambios pendientes."),
("Key", "Schlüssel", "Clé", "キー", "Nøkkel", "Ŝlosilo", "சாவி", "Chave", "Clave"),
("Language", "Sprache", "Langue", "言語", "Språk", "Lingvo", "மொழி", "Idioma", "Idioma"),
("Last Episode", "Letzte Episode", "Dernier épisode", "最後のエピソード", "Siste episode", "Lasta epizodo", "கடைசி அத்தியாயம்", "Último episódio", "Último episodio"),
("Last episode", "Letzte Episode", "Dernier épisode", "最後のエピソード", "Siste episode", "Lasta epizodo", "கடைசி அத்தியாயம்", "Último episódio", "Último episodio"),
("Layout", "Layout", "Disposition", "レイアウト", "Oppsett", "Aranĝo", "அமைப்பு", "Layout", "Diseño"),
("Media Tags", "Medien-Tags", "Balises média", "メディアタグ", "Mediatagger", "Aŭdvidaj etikedoj", "மீடியா குறிச்சொற்கள்", "Tags de mídia", "Etiquetas de medios"),
("More than one default audio stream detected and no prompt set", "Mehr als ein Standard-Audiostream erkannt und keine Abfrage aktiviert", "Plus d'un flux audio par défaut détecté et aucune invite définie", "デフォルト音声ストリームが複数検出され、プロンプトも設定されていません", "Mer enn én standard lydstrøm funnet og ingen forespørsel satt", "Pli ol unu defaŭlta sonfluo detektita kaj neniu instigo agordita", "ஒருக்கும் மேற்பட்ட இயல்புநிலை ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை", "Mais de um fluxo de áudio padrão detectado e nenhum prompt definido", "Se detectó más de un flujo de audio predeterminado y no hay aviso configurado"),
("More than one default audio stream detected! Please select stream", "Mehr als ein Standard-Audiostream erkannt! Bitte Stream auswählen", "Plus d'un flux audio par défaut détecté ! Veuillez sélectionner un flux", "デフォルト音声ストリームが複数検出されました。ストリームを選択してください", "Mer enn én standard lydstrøm funnet. Velg strøm", "Pli ol unu defaŭlta sonfluo detektita! Bonvolu elekti fluon", "ஒருக்கும் மேற்பட்ட இயல்புநிலை ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்", "Mais de um fluxo de áudio padrão detectado! Selecione o fluxo", "Se detectó más de un flujo de audio predeterminado. Selecciona el flujo"),
("More than one default subtitle stream detected and no prompt set", "Mehr als ein Standard-Untertitelstream erkannt und keine Abfrage aktiviert", "Plus d'un flux de sous-titres par défaut détecté et aucune invite définie", "デフォルト字幕ストリームが複数検出され、プロンプトも設定されていません", "Mer enn én standard undertekststrøm funnet og ingen forespørsel satt", "Pli ol unu defaŭlta subtitola fluo detektita kaj neniu instigo agordita", "ஒருக்கும் மேற்பட்ட இயல்புநிலை வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை", "Mais de um fluxo de legenda padrão detectado e nenhum prompt definido", "Se detectó más de un flujo de subtítulos predeterminado y no hay aviso configurado"),
("More than one default subtitle stream detected! Please select stream", "Mehr als ein Standard-Untertitelstream erkannt! Bitte Stream auswählen", "Plus d'un flux de sous-titres par défaut détecté ! Veuillez sélectionner un flux", "デフォルト字幕ストリームが複数検出されました。ストリームを選択してください", "Mer enn én standard undertekststrøm funnet. Velg strøm", "Pli ol unu defaŭlta subtitola fluo detektita! Bonvolu elekti fluon", "ஒருக்கும் மேற்பட்ட இயல்புநிலை வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்", "Mais de um fluxo de legenda padrão detectado! Selecione o fluxo", "Se detectó más de un flujo de subtítulos predeterminado. Selecciona el flujo"),
("More than one default video stream detected and no prompt set", "Mehr als ein Standard-Videostream erkannt und keine Abfrage aktiviert", "Plus d'un flux vidéo par défaut détecté et aucune invite définie", "デフォルト映像ストリームが複数検出され、プロンプトも設定されていません", "Mer enn én standard videostrøm funnet og ingen forespørsel satt", "Pli ol unu defaŭlta videofluo detektita kaj neniu instigo agordita", "ஒருக்கும் மேற்பட்ட இயல்புநிலை வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை", "Mais de um fluxo de vídeo padrão detectado e nenhum prompt definido", "Se detectó más de un flujo de vídeo predeterminado y no hay aviso configurado"),
("More than one default video stream detected! Please select stream", "Mehr als ein Standard-Videostream erkannt! Bitte Stream auswählen", "Plus d'un flux vidéo par défaut détecté ! Veuillez sélectionner un flux", "デフォルト映像ストリームが複数検出されました。ストリームを選択してください", "Mer enn én standard videostrøm funnet. Velg strøm", "Pli ol unu defaŭlta videofluo detektita! Bonvolu elekti fluon", "ஒருக்கும் மேற்பட்ட இயல்புநிலை வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்", "Mais de um fluxo de vídeo padrão detectado! Selecione o fluxo", "Se detectó más de un flujo de vídeo predeterminado. Selecciona el flujo"),
("More than one forced audio stream detected and no prompt set", "Mehr als ein erzwungener Audiostream erkannt und keine Abfrage aktiviert", "Plus d'un flux audio forcé détecté et aucune invite définie", "強制音声ストリームが複数検出され、プロンプトも設定されていません", "Mer enn én tvungen lydstrøm funnet og ingen forespørsel satt", "Pli ol unu devigita sonfluo detektita kaj neniu instigo agordita", "ஒருக்கும் மேற்பட்ட கட்டாய ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை", "Mais de um fluxo de áudio forçado detectado e nenhum prompt definido", "Se detectó más de un flujo de audio forzado y no hay aviso configurado"),
("More than one forced audio stream detected! Please select stream", "Mehr als ein erzwungener Audiostream erkannt! Bitte Stream auswählen", "Plus d'un flux audio forcé détecté ! Veuillez sélectionner un flux", "強制音声ストリームが複数検出されました。ストリームを選択してください", "Mer enn én tvungen lydstrøm funnet. Velg strøm", "Pli ol unu devigita sonfluo detektita! Bonvolu elekti fluon", "ஒருக்கும் மேற்பட்ட கட்டாய ஒலி ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்", "Mais de um fluxo de áudio forçado detectado! Selecione o fluxo", "Se detectó más de un flujo de audio forzado. Selecciona el flujo"),
("More than one forced subtitle stream detected and no prompt set", "Mehr als ein erzwungener Untertitelstream erkannt und keine Abfrage aktiviert", "Plus d'un flux de sous-titres forcé détecté et aucune invite définie", "強制字幕ストリームが複数検出され、プロンプトも設定されていません", "Mer enn én tvungen undertekststrøm funnet og ingen forespørsel satt", "Pli ol unu devigita subtitola fluo detektita kaj neniu instigo agordita", "ஒருக்கும் மேற்பட்ட கட்டாய வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை", "Mais de um fluxo de legenda forçada detectado e nenhum prompt definido", "Se detectó más de un flujo de subtítulos forzados y no hay aviso configurado"),
("More than one forced subtitle stream detected! Please select stream", "Mehr als ein erzwungener Untertitelstream erkannt! Bitte Stream auswählen", "Plus d'un flux de sous-titres forcé détecté ! Veuillez sélectionner un flux", "強制字幕ストリームが複数検出されました。ストリームを選択してください", "Mer enn én tvungen undertekststrøm funnet. Velg strøm", "Pli ol unu devigita subtitola fluo detektita! Bonvolu elekti fluon", "ஒருக்கும் மேற்பட்ட கட்டாய வசன ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்", "Mais de um fluxo de legenda forçada detectado! Selecione o fluxo", "Se detectó más de un flujo de subtítulos forzados. Selecciona el flujo"),
("More than one forced video stream detected and no prompt set", "Mehr als ein erzwungener Videostream erkannt und keine Abfrage aktiviert", "Plus d'un flux vidéo forcé détecté et aucune invite définie", "強制映像ストリームが複数検出され、プロンプトも設定されていません", "Mer enn én tvungen videostrøm funnet og ingen forespørsel satt", "Pli ol unu devigita videofluo detektita kaj neniu instigo agordita", "ஒருக்கும் மேற்பட்ட கட்டாய வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன, மேலும் எந்த prompt-வும் அமைக்கப்படவில்லை", "Mais de um fluxo de vídeo forçado detectado e nenhum prompt definido", "Se detectó más de un flujo de vídeo forzado y no hay aviso configurado"),
("More than one forced video stream detected! Please select stream", "Mehr als ein erzwungener Videostream erkannt! Bitte Stream auswählen", "Plus d'un flux vidéo forcé détecté ! Veuillez sélectionner un flux", "強制映像ストリームが複数検出されました。ストリームを選択してください", "Mer enn én tvungen videostrøm funnet. Velg strøm", "Pli ol unu devigita videofluo detektita! Bonvolu elekti fluon", "ஒருக்கும் மேற்பட்ட கட்டாய வீடியோ ஸ்ட்ரீம்கள் கண்டறியப்பட்டன! ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்", "Mais de um fluxo de vídeo forçado detectado! Selecione o fluxo", "Se detectó más de un flujo de vídeo forzado. Selecciona el flujo"),
("Name", "Name", "Nom", "名前", "Navn", "Nomo", "பெயர்", "Nome", "Nombre"),
("New Pattern", "Neues Muster", "Nouveau modèle", "新しいパターン", "Nytt mønster", "Nova ŝablono", "புதிய வடிவம்", "Novo padrão", "Nuevo patrón"),
("New Show", "Neue Serie", "Nouvelle série", "新しい番組", "Ny serie", "Nova serio", "புதிய தொடர்", "Nova série", "Nueva serie"),
("New filename pattern", "Neues Dateinamensmuster", "Nouveau modèle de nom de fichier", "新しいファイル名パターン", "Nytt filnavnmønster", "Nova dosiernoma ŝablono", "புதிய கோப்பு பெயர் வடிவம்", "Novo padrão de nome de arquivo", "Nuevo patrón de nombre de archivo"),
("New shifted season", "Neue verschobene Staffel", "Nouvelle saison décalée", "新しいシーズンシフト", "Ny forskjøvet sesong", "Nova ŝovita sezono", "புதிய மாற்றிய சீசன்", "Nova temporada deslocada", "Nueva temporada desplazada"),
("New stream", "Neuer Stream", "Nouveau flux", "新しいストリーム", "Ny strøm", "Nova fluo", "புதிய ஸ்ட்ரீம்", "Novo fluxo", "Nuevo flujo"),
("No", "Nein", "Non", "いいえ", "Nei", "Ne", "இல்லை", "Não", "No"),
("No changes to apply.", "Keine Änderungen zum Anwenden.", "Aucune modification à appliquer.", "適用する変更はありません。", "Ingen endringer å bruke.", "Neniuj ŝanĝoj por apliki.", "பயன்படுத்த மாற்றங்கள் இல்லை.", "Nenhuma alteração para aplicar.", "No hay cambios para aplicar."),
("No changes to revert.", "Keine Änderungen zum Zurücksetzen.", "Aucune modification à annuler.", "元に戻す変更はありません。", "Ingen endringer å tilbakestille.", "Neniuj ŝanĝoj por malfari.", "மீட்டெடுக்க மாற்றங்கள் இல்லை.", "Nenhuma alteração para reverter.", "No hay cambios para revertir."),
("Normalization disabled.", "Normalisierung deaktiviert.", "Normalisation désactivée.", "正規化を無効にしました。", "Normalisering deaktivert.", "Normaligo malŝaltita.", "சீரமைப்பு முடக்கப்பட்டது.", "Normalização desativada.", "Normalización desactivada."),
("Normalization enabled.", "Normalisierung aktiviert.", "Normalisation activée.", "正規化を有効にしました。", "Normalisering aktivert.", "Normaligo ŝaltita.", "சீரமைப்பு இயக்கப்பட்டது.", "Normalização ativada.", "Normalización activada."),
("Normalize", "Normalisieren", "Normaliser", "正規化", "Normaliser", "Normaligi", "சீரமை", "Normalizar", "Normalizar"),
("Notes", "Notizen", "Notes", "メモ", "Notater", "Notoj", "குறிப்புகள்", "Notas", "Notas"),
("Pattern", "Muster", "Modèle", "パターン", "Mønster", "Ŝablono", "வடிவம்", "Padrão", "Patrón"),
("Planned Changes (file->edited output)", "Geplante Änderungen (Datei->bearbeitete Ausgabe)", "Modifications prévues (fichier->sortie modifiée)", "予定された変更 (ファイル->編集後出力)", "Planlagte endringer (fil->redigert utdata)", "Planitaj ŝanĝoj (dosiero->redaktita eligo)", "திட்டமிட்ட மாற்றங்கள் (கோப்பு->திருத்திய வெளியீடு)", "Alterações planejadas (arquivo->saída editada)", "Cambios planificados (archivo->salida editada)"),
("Quality", "Qualität", "Qualité", "品質", "Kvalitet", "Kvalito", "தரம்", "Qualidade", "Calidad"),
("Quit", "Beenden", "Quitter", "終了", "Avslutt", "Eliri", "வெளியேறு", "Sair", "Salir"),
("Remove Pattern", "Muster entfernen", "Supprimer le modèle", "パターンを削除", "Fjern mønster", "Forigi ŝablonon", "வடிவத்தை நீக்கு", "Remover padrão", "Eliminar patrón"),
("Revert", "Zurücksetzen", "Annuler les modifications", "元に戻す", "Tilbakestill", "Malfari", "மீட்டு", "Reverter", "Revertir"),
("Reverted pending changes.", "Ausstehende Änderungen verworfen.", "Modifications en attente annulées.", "保留中の変更を元に戻しました。", "Ventende endringer ble tilbakestilt.", "Malfaris atendatajn ŝanĝojn.", "நிலுவையில் உள்ள மாற்றங்கள் மீட்டெடுக்கப்பட்டன.", "Alterações pendentes revertidas.", "Se revirtieron los cambios pendientes."),
("Save", "Speichern", "Enregistrer", "保存", "Lagre", "Konservi", "சேமி", "Salvar", "Guardar"),
("Season Offset", "Staffeloffset", "Décalage de saison", "シーズンオフセット", "Sesongforskyvning", "Sezona deŝovo", "சீசன் இடச்சரிவு", "Deslocamento de temporada", "Desplazamiento de temporada"),
("Select a stream first.", "Bitte zuerst einen Stream auswählen.", "Veuillez d'abord sélectionner un flux.", "まずストリームを選択してください。", "Velg en strøm først.", "Bonvolu unue elekti fluon.", "முதலில் ஒரு ஸ்ட்ரீமைத் தேர்ந்தெடுக்கவும்.", "Selecione um fluxo primeiro.", "Selecciona primero un flujo."),
("Set Default", "Als Standard setzen", "Définir par défaut", "デフォルトに設定", "Sett som standard", "Agordi kiel defaŭltan", "இயல்புநிலையாக அமை", "Definir como padrão", "Establecer como predeterminado"),
("Set Forced", "Als erzwungen setzen", "Définir comme forcé", "強制に設定", "Sett som tvungen", "Agordi kiel devigitan", "கட்டாயமாக அமை", "Definir como forçado", "Establecer como forzado"),
("Settings Screen", "Einstellungsbildschirm", "Écran des paramètres", "設定画面", "Innstillingsskjerm", "Agorda ekrano", "அமைப்புகள் திரை", "Tela de configurações", "Pantalla de ajustes"),
("Numbering Mapping", "Verschobene Staffeln", "Saisons décalées", "シフト済みシーズン", "Forskjøvne sesonger", "Ŝovitaj sezonoj", "மாற்றிய சீசன்கள்", "Temporadas deslocadas", "Temporadas desplazadas"),
("Show", "Serie", "Série", "番組", "Serie", "Serio", "தொடர்", "Série", "Serie"),
("Shows", "Serien", "Séries", "番組一覧", "Serier", "Serioj", "தொடர்கள்", "Séries", "Series"),
("Source Season", "Quellstaffel", "Saison source", "元シーズン", "Kildesesong", "Fonta sezono", "மூல சீசன்", "Temporada de origem", "Temporada de origen"),
("SrcIndex", "QuellIndex", "Index source", "元インデックス", "Kildeindeks", "Fontindekso", "மூலச் சுட்டி", "Índice de origem", "Índice origen"),
("Status", "Status", "Statut", "状態", "Status", "Stato", "நிலை", "Status", "Estado"),
("Stay", "Bleiben", "Rester", "このまま", "Bli", "Resti", "இரு", "Permanecer", "Permanecer"),
("Stream dispositions", "Stream-Dispositionen", "Dispositions des flux", "ストリーム disposition", "Strømdisposisjoner", "Fluaj dispozicioj", "ஸ்ட்ரீம் disposition-கள்", "Disposições do fluxo", "Disposiciones del flujo"),
("Stream tags", "Stream-Tags", "Balises du flux", "ストリームタグ", "Strømtagger", "Fluaj etikedoj", "ஸ்ட்ரீம் குறிச்சொற்கள்", "Tags do fluxo", "Etiquetas del flujo"),
("Streams", "Streams", "Flux", "ストリーム", "Strømmer", "Fluoj", "ஸ்ட்ரீம்கள்", "Fluxos", "Flujos"),
("SubIndex", "Unterindex", "Sous-index", "サブインデックス", "Underindeks", "Subindekso", "துணைச்சுட்டி", "Subíndice", "Subíndice"),
("Substitute", "Ersetzen", "Remplacer", "置換", "Erstatt", "Anstataŭigi", "மாற்று", "Substituir", "Sustituir"),
("Substitute pattern", "Muster ersetzen", "Remplacer le modèle", "パターンを置換", "Erstatt mønster", "Anstataŭigi ŝablonon", "வடிவத்தை மாற்று", "Substituir padrão", "Sustituir patrón"),
("Title", "Titel", "Titre", "タイトル", "Tittel", "Titolo", "தலைப்பு", "Título", "Título"),
("Type", "Typ", "Type", "タイプ", "Type", "Tipo", "வகை", "Tipo", "Tipo"),
("Unable to update selected stream.", "Ausgewählten Stream konnte nicht aktualisiert werden.", "Impossible de mettre à jour le flux sélectionné.", "選択したストリームを更新できませんでした。", "Kunne ikke oppdatere valgt strøm.", "Ne eblis ĝisdatigi la elektitan fluon.", "தேர்ந்தெடுக்கப்பட்ட ஸ்ட்ரீமைப் புதுப்பிக்க முடியவில்லை.", "Não foi possível atualizar o fluxo selecionado.", "No se pudo actualizar el flujo seleccionado."),
("Up", "Hoch", "Monter", "上へ", "Opp", "Supren", "மேல்", "Cima", "Arriba"),
("Update Pattern", "Muster aktualisieren", "Mettre à jour le modèle", "パターンを更新", "Oppdater mønster", "Ĝisdatigi ŝablonon", "வடிவத்தை புதுப்பி", "Atualizar padrão", "Actualizar patrón"),
("Updated media tag {tag!r}.", "Medien-Tag {tag!r} aktualisiert.", "Balise média {tag!r} mise à jour.", "メディアタグ {tag!r} を更新しました。", "Mediataggen {tag!r} ble oppdatert.", "Ĝisdatigis la aŭdvidan etikedon {tag!r}.", "மீடியா குறிச்சொல் {tag!r} புதுப்பிக்கப்பட்டது.", "Tag de mídia {tag!r} atualizada.", "Etiqueta de medios {tag!r} actualizada."),
("Updated stream #{index} ({track_type}).", "Stream #{index} ({track_type}) aktualisiert.", "Flux #{index} ({track_type}) mis à jour.", "ストリーム #{index} ({track_type}) を更新しました。", "Strøm #{index} ({track_type}) oppdatert.", "Ĝisdatigis fluon #{index} ({track_type}).", "ஸ்ட்ரீம் #{index} ({track_type}) புதுப்பிக்கப்பட்டது.", "Fluxo #{index} ({track_type}) atualizado.", "Flujo #{index} ({track_type}) actualizado."),
("Value", "Wert", "Valeur", "", "Verdi", "Valoro", "மதிப்பு", "Valor", "Valor"),
("Year", "Jahr", "Année", "", "År", "Jaro", "ஆண்டு", "Ano", "Año"),
("Yes", "Ja", "Oui", "はい", "Ja", "Jes", "ஆம்", "Sim", ""),
("add media tag: key='{key}' value='{value}'", "Medien-Tag hinzufügen: Schlüssel='{key}' Wert='{value}'", "ajouter une balise média : clé='{key}' valeur='{value}'", "メディアタグを追加: key='{key}' value='{value}'", "legg til mediatagg: nøkkel='{key}' verdi='{value}'", "aldoni aŭdvidan etikedon: ŝlosilo='{key}' valoro='{value}'", "மீடியா குறிச்சொல் சேர்: key='{key}' value='{value}'", "adicionar tag de mídia: chave='{key}' valor='{value}'", "añadir etiqueta de medios: clave='{key}' valor='{value}'"),
("add {track_type} track: index={index} lang={language}", "{track_type}-Stream hinzufügen: Index={index} Sprache={language}", "ajouter une piste {track_type} : index={index} langue={language}", "{track_type}ストリームを追加: index={index} lang={language}", "legg til {track_type}-spor: indeks={index} språk={language}", "aldoni {track_type}-trakon: indekso={index} lingvo={language}", "{track_type} ஸ்ட்ரீம் சேர்: index={index} lang={language}", "adicionar faixa {track_type}: índice={index} idioma={language}", "añadir pista {track_type}: índice={index} idioma={language}"),
("audio", "Audio", "audio", "音声", "lyd", "sono", "ஒலி", "áudio", "audio"),
("attachment", "Anhang", "pièce jointe", "添付", "vedlegg", "aldonaĵo", "இணைப்பு", "anexo", "adjunto"),
("captions", "Untertitel", "sous-titres", "キャプション", "teksting", "subtekstoj", "உரைப்பதிவுகள்", "legendas", "subtítulos"),
("change media tag: key='{key}' value='{value}'", "Medien-Tag ändern: Schlüssel='{key}' Wert='{value}'", "modifier une balise média : clé='{key}' valeur='{value}'", "メディアタグを変更: key='{key}' value='{value}'", "endre mediatagg: nøkkel='{key}' verdi='{value}'", "ŝanĝi aŭdvidan etikedon: ŝlosilo='{key}' valoro='{value}'", "மீடியா குறிச்சொல் மாற்று: key='{key}' value='{value}'", "alterar tag de mídia: chave='{key}' valor='{value}'", "cambiar etiqueta de medios: clave='{key}' valor='{value}'"),
("change stream #{index} ({track_type}:{sub_index}) add disposition={disposition}", "Stream #{index} ({track_type}:{sub_index}) Disposition hinzufügen={disposition}", "modifier le flux #{index} ({track_type}:{sub_index}) ajouter disposition={disposition}", "ストリーム #{index} ({track_type}:{sub_index}) disposition を追加={disposition}", "endre strøm #{index} ({track_type}:{sub_index}) legg til disposisjon={disposition}", "ŝanĝi fluon #{index} ({track_type}:{sub_index}) aldoni dispozicion={disposition}", "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) disposition சேர்={disposition}", "alterar fluxo #{index} ({track_type}:{sub_index}) adicionar disposição={disposition}", "cambiar flujo #{index} ({track_type}:{sub_index}) añadir disposición={disposition}"),
("change stream #{index} ({track_type}:{sub_index}) add key={key} value={value}", "Stream #{index} ({track_type}:{sub_index}) Schlüssel hinzufügen={key} Wert={value}", "modifier le flux #{index} ({track_type}:{sub_index}) ajouter clé={key} valeur={value}", "ストリーム #{index} ({track_type}:{sub_index}) key を追加={key} value={value}", "endre strøm #{index} ({track_type}:{sub_index}) legg til nøkkel={key} verdi={value}", "ŝanĝi fluon #{index} ({track_type}:{sub_index}) aldoni ŝlosilon={key} valoron={value}", "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) key சேர்={key} value={value}", "alterar fluxo #{index} ({track_type}:{sub_index}) adicionar chave={key} valor={value}", "cambiar flujo #{index} ({track_type}:{sub_index}) añadir clave={key} valor={value}"),
("change stream #{index} ({track_type}:{sub_index}) change key={key} value={value}", "Stream #{index} ({track_type}:{sub_index}) Schlüssel ändern={key} Wert={value}", "modifier le flux #{index} ({track_type}:{sub_index}) changer clé={key} valeur={value}", "ストリーム #{index} ({track_type}:{sub_index}) key を変更={key} value={value}", "endre strøm #{index} ({track_type}:{sub_index}) endre nøkkel={key} verdi={value}", "ŝanĝi fluon #{index} ({track_type}:{sub_index}) ŝanĝi ŝlosilon={key} valoron={value}", "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) key மாற்று={key} value={value}", "alterar fluxo #{index} ({track_type}:{sub_index}) alterar chave={key} valor={value}", "cambiar flujo #{index} ({track_type}:{sub_index}) cambiar clave={key} valor={value}"),
("change stream #{index} ({track_type}:{sub_index}) remove disposition={disposition}", "Stream #{index} ({track_type}:{sub_index}) Disposition entfernen={disposition}", "modifier le flux #{index} ({track_type}:{sub_index}) supprimer disposition={disposition}", "ストリーム #{index} ({track_type}:{sub_index}) disposition を削除={disposition}", "endre strøm #{index} ({track_type}:{sub_index}) fjern disposisjon={disposition}", "ŝanĝi fluon #{index} ({track_type}:{sub_index}) forigi dispozicion={disposition}", "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) disposition நீக்கு={disposition}", "alterar fluxo #{index} ({track_type}:{sub_index}) remover disposição={disposition}", "cambiar flujo #{index} ({track_type}:{sub_index}) quitar disposición={disposition}"),
("change stream #{index} ({track_type}:{sub_index}) remove key={key} value={value}", "Stream #{index} ({track_type}:{sub_index}) Schlüssel entfernen={key} Wert={value}", "modifier le flux #{index} ({track_type}:{sub_index}) supprimer clé={key} valeur={value}", "ストリーム #{index} ({track_type}:{sub_index}) key を削除={key} value={value}", "endre strøm #{index} ({track_type}:{sub_index}) fjern nøkkel={key} verdi={value}", "ŝanĝi fluon #{index} ({track_type}:{sub_index}) forigi ŝlosilon={key} valoron={value}", "ஸ்ட்ரீம் #{index} ({track_type}:{sub_index}) key நீக்கு={key} value={value}", "alterar fluxo #{index} ({track_type}:{sub_index}) remover chave={key} valor={value}", "cambiar flujo #{index} ({track_type}:{sub_index}) quitar clave={key} valor={value}"),
("clean_effects", "Nur Effekte", "effets seuls", "効果音のみ", "bare effekter", "nur efektoj", "ஒலி விளைவுகள் மட்டும்", "apenas efeitos", "solo efectos"),
("comment", "Kommentar", "commentaire", "コメント", "kommentar", "komento", "கருத்துரை", "comentário", "comentario"),
("default", "Standard", "par défaut", "デフォルト", "standard", "defaŭlta", "இயல்புநிலை", "padrão", "predeterminado"),
("dependent", "abhängig", "dépendant", "依存", "avhengig", "dependa", "சார்ந்த", "dependente", "dependiente"),
("descriptions", "Beschreibungen", "descriptions", "解説", "beskrivelser", "priskriboj", "விளக்கங்கள்", "descrições", "descripciones"),
("differences", "Unterschiede", "différences", "差分", "forskjeller", "diferencoj", "வேறுபாடுகள்", "diferenças", "diferencias"),
("dub", "Synchronisiert", "doublage", "吹替", "dubbet", "dublado", "டப்", "dublado", "doblaje"),
("for pattern", "für Muster", "pour le modèle", "パターン用", "for mønster", "por ŝablono", "வடிவத்திற்கு", "para o padrão", "para el patrón"),
("forced", "erzwungen", "forcé", "強制", "tvungen", "devigita", "கட்டாயம்", "forçado", "forzado"),
("from", "von", "de", "", "fra", "de", "இருந்து", "de", "de"),
("from pattern", "aus Muster", "depuis le modèle", "パターンから", "fra mønster", "de ŝablono", "வடிவத்திலிருந்து", "do padrão", "del patrón"),
("from show", "aus Serie", "depuis la série", "番組から", "fra serie", "el serio", "தொடரிலிருந்து", "da série", "de la serie"),
("hearing_impaired", "hörgeschädigt", "malentendants", "聴覚障害者向け", "hørselshemmet", "aŭdmalhelpita", "கேள்வித்திறன் குறைபாடு", "deficiência auditiva", "personas con discapacidad auditiva"),
("karaoke", "Karaoke", "karaoké", "カラオケ", "karaoke", "karaokeo", "கரோக்கே", "karaokê", "karaoke"),
("lyrics", "Liedtext", "paroles", "歌詞", "sangtekst", "kantoteksto", "பாடல்வரிகள்", "letra", "letra"),
("metadata", "Metadaten", "métadonnées", "メタデータ", "metadata", "metadatenoj", "மெட்டாடேட்டா", "metadados", "metadatos"),
("non_diegetic", "nicht-diegetisch", "non diégétique", "非ダイジェティック", "ikke-diegetisk", "nediĝeta", "அல்லாத-டைஜெடிக்", "não diegético", "no diegético"),
("original", "Original", "original", "オリジナル", "original", "originala", "மூலம்", "original", "original"),
("pattern #{id}", "Muster #{id}", "modèle #{id}", "パターン #{id}", "mønster #{id}", "ŝablono #{id}", "வடிவு #{id}", "padrão #{id}", "patrón #{id}"),
("remove media tag: key='{key}' value='{value}'", "Medien-Tag entfernen: Schlüssel='{key}' Wert='{value}'", "supprimer une balise média : clé='{key}' valeur='{value}'", "メディアタグを削除: key='{key}' value='{value}'", "fjern mediatagg: nøkkel='{key}' verdi='{value}'", "forigi aŭdvidan etikedon: ŝlosilo='{key}' valoro='{value}'", "மீடியா குறிச்சொல் நீக்கு: key='{key}' value='{value}'", "remover tag de mídia: chave='{key}' valor='{value}'", "eliminar etiqueta de medios: clave='{key}' valor='{value}'"),
("remove stream #{index}", "Stream #{index} entfernen", "supprimer le flux #{index}", "ストリーム #{index} を削除", "fjern strøm #{index}", "forigi fluon #{index}", "ஸ்ட்ரீம் #{index} நீக்கு", "remover fluxo #{index}", "eliminar flujo #{index}"),
("show #{id}", "Serie #{id}", "série #{id}", "番組 #{id}", "serie #{id}", "serio #{id}", "தொடர் #{id}", "série #{id}", "serie #{id}"),
("stereo", "Stereo", "stéréo", "ステレオ", "stereo", "stereo", "ஸ்டீரியோ", "estéreo", "estéreo"),
("still_image", "Standbild", "image fixe", "静止画", "stillbilde", "senmova bildo", "நிலைப்படம்", "imagem estática", "imagen fija"),
("sub index", "Unterindex", "sous-index", "サブインデックス", "underindeks", "subindekso", "துணைச்சுட்டி", "subíndice", "subíndice"),
("subtitle", "Untertitel", "sous-titre", "字幕", "undertekst", "subtitolo", "வசனம்", "legenda", "subtítulo"),
("timed_thumbnails", "zeitgesteuerte Vorschaubilder", "miniatures horodatées", "時間指定サムネイル", "tidsbestemte miniatyrer", "tempigitaj bildetoj", "நேர நிர்ணய சிறுபடங்கள்", "miniaturas temporizadas", "miniaturas temporizadas"),
("undefined", "undefiniert", "indéfini", "未定義", "udefinert", "nedifinita", "வரையறுக்கப்படாத", "indefinido", "indefinido"),
("unknown", "unbekannt", "inconnu", "不明", "ukjent", "nekonata", "தெரியாத", "desconhecido", "desconocido"),
("video", "Video", "vidéo", "映像", "video", "video", "வீடியோ", "vídeo", "vídeo"),
("visual_impaired", "sehgeschädigt", "malvoyants", "視覚障害者向け", "synshemmet", "vidmalhelpita", "பார்வைத்திறன் குறைபாடு", "deficiência visual", "personas con discapacidad visual"),
]
def extract_source_phrases() -> set[str]:
    phrases: set[str] = set()
    for path in SOURCE_ROOT.rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if not isinstance(node, ast.Call) or not node.args:
                continue
            func = node.func
            func_name = (
                func.id if isinstance(func, ast.Name) else func.attr if isinstance(func, ast.Attribute) else None
            )
            if func_name != "t":
                continue
            first_arg = node.args[0]
            if isinstance(first_arg, ast.Constant) and isinstance(first_arg.value, str):
                phrases.add(first_arg.value)
    return phrases


def build_translation_rows() -> dict[str, dict[str, str]]:
    rows: dict[str, dict[str, str]] = {}
    for row in PHRASE_ROWS:
        source_phrase = row[0]
        rows[source_phrase] = {
            language_code: translated
            for language_code, translated in zip(TRANSLATED_LANGUAGE_CODES, row[1:])
        }
    return rows


def build_iso_language_catalog(language_code: str) -> dict[str, str]:
    if language_code == "en":
        return {
            language.name: str(language.value["name"])
            for language in IsoLanguage
        }
    translation = gettext.translation("iso_639", languages=[language_code], fallback=True)
    iso_catalog: dict[str, str] = {}
    for language in IsoLanguage:
        english_name = str(language.value["name"])
        translated = translation.gettext(english_name)
        iso_catalog[language.name] = translated if translated else english_name
    return iso_catalog


def main() -> None:
    source_phrases = extract_source_phrases() | EXTRA_PHRASES
    row_translations = build_translation_rows()
    OUTPUT_ROOT.mkdir(parents=True, exist_ok=True)
    for language_code in LANGUAGE_CODES:
        if language_code not in SUPPORTED_LANGUAGES:
            raise ValueError(f"Unsupported language code: {language_code}")
        phrases_catalog = {
            phrase: (
                row_translations.get(phrase, {}).get(language_code, phrase)
                if language_code != "en"
                else phrase
            )
            for phrase in sorted(source_phrases)
        }
        catalog = {
            "phrases": phrases_catalog,
            "iso_languages": build_iso_language_catalog(language_code),
        }
        (OUTPUT_ROOT / f"{language_code}.json").write_text(
            json.dumps(catalog, ensure_ascii=False, indent=2, sort_keys=True) + "\n",
            encoding="utf-8",
        )


if __name__ == "__main__":
    main()
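The AST walk in `extract_source_phrases` can be exercised on its own; a minimal sketch of the same extraction strategy, where `SAMPLE` and `find_t_phrases` are illustrative names, not part of the repository:

```python
import ast

SAMPLE = '''
from i18n import t
t("New Pattern")
label = t("Streams")
print("not a phrase")
'''


def find_t_phrases(source: str) -> set[str]:
    # Collect the first string argument of every t(...) call, whether
    # t is referenced as a bare name or as an attribute (obj.t).
    phrases: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call) or not node.args:
            continue
        func = node.func
        name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
        if name != "t":
            continue
        first = node.args[0]
        if isinstance(first, ast.Constant) and isinstance(first.value, str):
            phrases.add(first.value)
    return phrases


print(sorted(find_t_phrases(SAMPLE)))  # ['New Pattern', 'Streams']
```

Calls whose first argument is not a string literal (f-strings, variables) are skipped, which is why dynamically built phrases would need to be listed in something like `EXTRA_PHRASES`.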

@@ -1,386 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

DEV_BRANCH="dev"
MAIN_BRANCH="main"
ORIGIN_REMOTE="origin"

DEFAULT_AGENT_DEVELOPMENT_PATHS=(
    "AGENTS.md"
    "SCRATCHPAD.md"
    "guidance"
    "requirements"
    "prompts"
    "process"
    "tools/merge_dev_into_main.sh"
)
AGENT_DEVELOPMENT_PATHS=("${DEFAULT_AGENT_DEVELOPMENT_PATHS[@]}")
CURRENT_BRANCH="${DEV_BRANCH}"
ASSUME_YES=0
DRY_RUN=0
SKIP_TESTS=0

usage() {
    cat <<EOF
Usage: $(basename "$0") [--yes] [--dry-run] [--skip-tests] [--help]

Merge the local ${DEV_BRANCH} branch into ${MAIN_BRANCH}, remove agent-development files
from ${MAIN_BRANCH}, auto-resolve merge conflicts limited to those cleanup paths,
create a release merge commit and tag, push to ${ORIGIN_REMOTE}/${MAIN_BRANCH}, and
switch back to ${DEV_BRANCH}.

Options:
  --yes         Skip the interactive confirmation prompt.
  --dry-run     Print the validated release plan without changing git state.
  --skip-tests  Skip the default pre-release test gate (./tools/test.sh).
  --help        Show this help text.

Environment overrides:
  FFX_RELEASE_CLEAN_PATHS  Colon-separated path list to remove from ${MAIN_BRANCH}
                           after merging ${DEV_BRANCH}. Defaults to:
                           ${DEFAULT_AGENT_DEVELOPMENT_PATHS[*]}
EOF
}

fail() {
    printf '%s\n' "$*" >&2
    exit 1
}

cleanup() {
    local exit_code="$1"
    trap - EXIT
    if git rev-parse -q --verify MERGE_HEAD >/dev/null 2>&1; then
        printf 'Merge is incomplete; aborting merge on %s...\n' "${CURRENT_BRANCH}" >&2
        git merge --abort >/dev/null 2>&1 || true
    fi
    if [ "${CURRENT_BRANCH}" != "${DEV_BRANCH}" ]; then
        printf 'Switching back to %s...\n' "${DEV_BRANCH}" >&2
        git switch "${DEV_BRANCH}" >/dev/null 2>&1 || true
        CURRENT_BRANCH="${DEV_BRANCH}"
    fi
    exit "${exit_code}"
}

load_cleanup_paths() {
    if [ -n "${FFX_RELEASE_CLEAN_PATHS:-}" ]; then
        IFS=':' read -r -a AGENT_DEVELOPMENT_PATHS <<< "${FFX_RELEASE_CLEAN_PATHS}"
    fi
    if [ "${#AGENT_DEVELOPMENT_PATHS[@]}" -eq 0 ]; then
        fail "Release cleanup path list is empty."
    fi
}

path_is_cleanup_target() {
    local candidate_path="$1"
    local cleanup_path=""
    for cleanup_path in "${AGENT_DEVELOPMENT_PATHS[@]}"; do
        case "${candidate_path}" in
            "${cleanup_path}"|"${cleanup_path}"/*)
                return 0
                ;;
        esac
    done
    return 1
}

auto_resolve_cleanup_conflicts() {
    local unmerged_paths=()
    local non_cleanup_conflicts=()
    local remaining_conflicts=()
    local conflicted_path=""
    mapfile -t unmerged_paths < <(git diff --name-only --diff-filter=U)
    if [ "${#unmerged_paths[@]}" -eq 0 ]; then
        return 1
    fi
    for conflicted_path in "${unmerged_paths[@]}"; do
        if ! path_is_cleanup_target "${conflicted_path}"; then
            non_cleanup_conflicts+=("${conflicted_path}")
        fi
    done
    if [ "${#non_cleanup_conflicts[@]}" -ne 0 ]; then
        printf 'Merge produced non-cleanup conflicts:\n' >&2
        for conflicted_path in "${non_cleanup_conflicts[@]}"; do
            printf ' - %s\n' "${conflicted_path}" >&2
        done
        return 1
    fi
    printf 'Auto-resolving merge conflicts for release-cleanup paths:\n'
    for conflicted_path in "${unmerged_paths[@]}"; do
        printf ' - %s\n' "${conflicted_path}"
    done
    git rm -r -f --ignore-unmatch "${AGENT_DEVELOPMENT_PATHS[@]}" >/dev/null
    mapfile -t remaining_conflicts < <(git diff --name-only --diff-filter=U)
    if [ "${#remaining_conflicts[@]}" -ne 0 ]; then
        printf 'Cleanup conflict auto-resolution left unresolved paths:\n' >&2
        for conflicted_path in "${remaining_conflicts[@]}"; do
            printf ' - %s\n' "${conflicted_path}" >&2
        done
        return 1
    fi
    return 0
}

require_repo_state() {
    if ! git rev-parse --show-toplevel >/dev/null 2>&1; then
        fail "This helper must be run inside a git repository."
    fi
    if ! git show-ref --verify --quiet "refs/heads/${DEV_BRANCH}"; then
        fail "Local branch '${DEV_BRANCH}' does not exist."
    fi
    if ! git show-ref --verify --quiet "refs/heads/${MAIN_BRANCH}"; then
        fail "Local branch '${MAIN_BRANCH}' does not exist."
    fi
    if ! git remote get-url "${ORIGIN_REMOTE}" >/dev/null 2>&1; then
        fail "Remote '${ORIGIN_REMOTE}' is not configured."
    fi
}

require_dev_checkout() {
    CURRENT_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
    if [ "${CURRENT_BRANCH}" != "${DEV_BRANCH}" ]; then
        fail "Current branch is '${CURRENT_BRANCH}', but '${DEV_BRANCH}' is required."
    fi
}

require_clean_worktree() {
    if [ -n "$(git status --porcelain)" ]; then
        fail "Local '${DEV_BRANCH}' branch is dirty. Commit, stash, or clean changes first."
    fi
}

fetch_remote_state() {
    printf 'Fetching %s branch and tag state...\n' "${ORIGIN_REMOTE}"
    git fetch "${ORIGIN_REMOTE}" "${DEV_BRANCH}" "${MAIN_BRANCH}" --tags >/dev/null
}

require_branch_matches_remote() {
    local branch="$1"
    local local_sha=""
    local remote_sha=""
    if ! git show-ref --verify --quiet "refs/remotes/${ORIGIN_REMOTE}/${branch}"; then
        fail "Remote branch '${ORIGIN_REMOTE}/${branch}' does not exist."
    fi
    local_sha="$(git rev-parse "refs/heads/${branch}")"
    remote_sha="$(git rev-parse "refs/remotes/${ORIGIN_REMOTE}/${branch}")"
    if [ "${local_sha}" != "${remote_sha}" ]; then
        fail "Local branch '${branch}' is not up to date with '${ORIGIN_REMOTE}/${branch}'. Pull, rebase, or push first."
    fi
}

resolve_release_version() {
    local version_from_pyproject=""
    local version_from_constants=""
    version_from_pyproject="$(
        sed -n 's/^version = "\(.*\)"$/\1/p' pyproject.toml | head -n 1
    )"
    version_from_constants="$(
        sed -n "s/^VERSION='\(.*\)'$/\1/p" src/ffx/constants.py | head -n 1
    )"
    if [ -z "${version_from_pyproject}" ]; then
        fail "Could not resolve release version from pyproject.toml."
    fi
    if [ -z "${version_from_constants}" ]; then
        fail "Could not resolve release version from src/ffx/constants.py."
    fi
    if [ "${version_from_pyproject}" != "${version_from_constants}" ]; then
        fail "Version mismatch: pyproject.toml=${version_from_pyproject}, src/ffx/constants.py=${version_from_constants}."
    fi
    printf '%s\n' "${version_from_pyproject}"
}

require_release_tag_available() {
    local release_version="$1"
    local release_tag="v${release_version}"
    if git rev-parse -q --verify "refs/tags/${release_tag}" >/dev/null 2>&1; then
        fail "Tag '${release_tag}' already exists."
    fi
    if git rev-parse -q --verify "refs/tags/${release_version}" >/dev/null 2>&1; then
        fail "Bare tag '${release_version}' already exists; refusing to create ambiguous release tags."
    fi
}

run_pre_release_tests() {
    if [ "${SKIP_TESTS}" -eq 1 ]; then
        printf 'Skipping pre-release tests.\n'
        return 0
    fi
    if [ ! -x "./tools/test.sh" ]; then
        fail "Missing executable test runner at ./tools/test.sh."
    fi
    printf 'Running pre-release tests via ./tools/test.sh...\n'
    ./tools/test.sh
}

print_release_plan() {
    local release_version="$1"
    local release_tag="v${release_version}"
    local release_commit_message="Release ${release_tag}"
    printf 'Dry run only. Planned steps:\n'
    printf '1. Ensure current branch is %s and the worktree is clean.\n' "${DEV_BRANCH}"
    printf '2. Fetch %s and verify local %s and %s exactly match %s/%s and %s/%s.\n' \
        "${ORIGIN_REMOTE}" \
        "${DEV_BRANCH}" \
        "${MAIN_BRANCH}" \
        "${ORIGIN_REMOTE}" \
        "${DEV_BRANCH}" \
        "${ORIGIN_REMOTE}" \
        "${MAIN_BRANCH}"
    if [ "${SKIP_TESTS}" -eq 1 ]; then
        printf '3. Skip the pre-release test gate.\n'
    else
        printf '3. Run ./tools/test.sh as the pre-release test gate.\n'
    fi
    printf '4. Switch to %s and merge %s with --no-ff --no-commit.\n' "${MAIN_BRANCH}" "${DEV_BRANCH}"
    printf '5. Auto-resolve merge conflicts limited to release-cleanup paths and remove them from %s:\n' "${MAIN_BRANCH}"
    local cleanup_path=""
    for cleanup_path in "${AGENT_DEVELOPMENT_PATHS[@]}"; do
        printf ' - %s\n' "${cleanup_path}"
    done
    printf '6. Create merge commit: %s\n' "${release_commit_message}"
    printf '7. Create annotated tag: %s\n' "${release_tag}"
    printf '8. Push %s to %s/%s with --follow-tags.\n' "${MAIN_BRANCH}" "${ORIGIN_REMOTE}" "${MAIN_BRANCH}"
    printf '9. Switch back to %s.\n' "${DEV_BRANCH}"
}

trap 'cleanup $?' EXIT

while [ "$#" -gt 0 ]; do
    case "$1" in
        --yes)
            ASSUME_YES=1
            ;;
        --dry-run)
            DRY_RUN=1
            ;;
        --skip-tests)
            SKIP_TESTS=1
            ;;
        --help|-h)
            usage
            exit 0
            ;;
        *)
            usage >&2
            fail "Unknown option: $1"
            ;;
    esac
    shift
done

load_cleanup_paths
require_repo_state
require_dev_checkout
require_clean_worktree
fetch_remote_state
require_branch_matches_remote "${DEV_BRANCH}"
require_branch_matches_remote "${MAIN_BRANCH}"

RELEASE_VERSION="$(resolve_release_version)"
RELEASE_TAG="v${RELEASE_VERSION}"
RELEASE_COMMIT_MESSAGE="Release ${RELEASE_TAG}"
require_release_tag_available "${RELEASE_VERSION}"

printf 'This will merge %s into %s, remove agent-development files on %s,\n' "${DEV_BRANCH}" "${MAIN_BRANCH}" "${MAIN_BRANCH}"
printf 'auto-resolve cleanup-path conflicts, run the pre-release gate%s, create %s,\n' \
    "$([ "${SKIP_TESTS}" -eq 1 ] && printf ' (skipped)' || printf '')" \
    "${RELEASE_TAG}"
printf 'push to %s/%s, and switch back to %s.\n' \
    "${ORIGIN_REMOTE}" \
    "${MAIN_BRANCH}" \
    "${DEV_BRANCH}"

if [ "${ASSUME_YES}" -ne 1 ]; then
    printf 'Are you sure? [y/N] '
    read -r confirmation
    case "${confirmation}" in
        y|Y|yes|YES)
            ;;
        *)
            fail "Aborted by user."
            ;;
    esac
fi

if [ "${DRY_RUN}" -eq 1 ]; then
    print_release_plan "${RELEASE_VERSION}"
    exit 0
fi

run_pre_release_tests
require_clean_worktree
fetch_remote_state
require_branch_matches_remote "${DEV_BRANCH}"
require_branch_matches_remote "${MAIN_BRANCH}"
require_release_tag_available "${RELEASE_VERSION}"

git switch "${MAIN_BRANCH}" >/dev/null
CURRENT_BRANCH="${MAIN_BRANCH}"

printf 'Merging %s into %s...\n' "${DEV_BRANCH}" "${MAIN_BRANCH}"
if ! git merge --no-ff --no-commit "${DEV_BRANCH}"; then
    if ! auto_resolve_cleanup_conflicts; then
        fail "Merge from '${DEV_BRANCH}' into '${MAIN_BRANCH}' failed."
    fi
fi
if ! git rev-parse -q --verify MERGE_HEAD >/dev/null 2>&1; then
    fail "'${MAIN_BRANCH}' is already up to date with '${DEV_BRANCH}'. Nothing to merge."
fi

printf 'Removing agent-development files from %s...\n' "${MAIN_BRANCH}"
git rm -r -f --ignore-unmatch "${AGENT_DEVELOPMENT_PATHS[@]}" >/dev/null
if git diff --cached --quiet; then
    fail "No staged changes are present after merging '${DEV_BRANCH}' into '${MAIN_BRANCH}'."
fi

printf 'Creating release merge commit: %s\n' "${RELEASE_COMMIT_MESSAGE}"
git commit -m "${RELEASE_COMMIT_MESSAGE}"

printf 'Creating annotated tag: %s\n' "${RELEASE_TAG}"
git tag -a "${RELEASE_TAG}" -m "FFX ${RELEASE_VERSION}"

printf 'Pushing %s and annotated tags to %s...\n' "${MAIN_BRANCH}" "${ORIGIN_REMOTE}"
git push "${ORIGIN_REMOTE}" "${MAIN_BRANCH}" --follow-tags

printf 'Switching back to %s...\n' "${DEV_BRANCH}"
git switch "${DEV_BRANCH}" >/dev/null
CURRENT_BRANCH="${DEV_BRANCH}"

printf 'Release merge complete: %s pushed to %s/%s and tagged as %s.\n' \
    "${RELEASE_COMMIT_MESSAGE}" \
    "${ORIGIN_REMOTE}" \
    "${MAIN_BRANCH}" \
    "${RELEASE_TAG}"