# Compare commits

56 commits: `2eeea08be0` ... `season-shi`
Commits in range (only the `SHA1` column was populated; author and date cells were empty):

```text
008c643272  c302b30e63  7926407534  0894ac2fab  353759b983
454f5f0656  0e51d6337f  a24b6dedaa  8361fc536b  4d4272e5e8
559869ca68  0e4fae538b  2595bfe4f4  fc9d94aeee  111df11199
f0d4c36bc3  ef0d6e9274  d05b01cfb2  9dc08d48e9  20bdfc0dd7
4365e083dc  528915a235  9a980b5766  5eee7e1161  0a41998e29
ebdc23c3ce  9611930949  609f93b783  52c6462fa8  358ef18f77
fc729a2414  0939a0c6c2  c384d54c12  71553aad32  d19e69990a
be0f4b4c4e  01b5fdb289  60ae58500a  f9c8b8ac5e  5871ae30ad
52724ecc5b  f288d445e4  d9db6da191  5443881ea1  8946b57456
686239491b  126ba4487c  447cda19ef  f1ba913a98  59336aafb7
fd5ad3ed56  2d03a3bb10  4dc02d52a2  ed0cea9c26  15bfbdbe88
c354ba09ba
```
#### .gitignore (vendored, 15 changes)

```diff
@@ -1,4 +1,5 @@
-__pycache__
+__pycache__/
+*.py[cod]
 junk/
 .vscode
 .ipynb_checkpoints/
@@ -8,5 +9,15 @@ tools/ansible/inventory/cappuccino.yml
 tools/ansible/inventory/group_vars/all.yml
 ffx_test_report.log
 bin/conversiontest.py
-*.egg-info/
+build/
+dist/
+*.egg-info/
+.venv/
+venv/
+.codex
 
+
+*.mkv
+*.webm
+ffmpeg2pass-0.log
```
#### AGENTS.md (new file, 376 lines)

# AGENTS.md

This file is the entry point for agent guidance in this repository.

It is intentionally generic and reusable across projects. Keep this file focused on non-project-specific constraints, working style, and the structure used to link more detailed guidance.

# Purpose

- Provide a small default rule set for agents working in this repository.
- Keep the base guidance modular and easy to extend.
- Separate reusable agent behavior from project-specific requirements.

# Comment Syntax

- A segment wrapped in `<!--` and `-->` is a comment and must be ignored by agents.
- Use HTML comments for optional guidance that should stay inactive until enabled.
- To enable an optional segment, remove the surrounding `<!--` and `-->` markers.

# Core Principles

- Prefer the simplest solution that satisfies the current goal.
- Keep guidance lightweight: only add detail when it meaningfully improves outcomes.
- Reuse modular guideline files instead of expanding this file indefinitely.
- Treat project-specific documents as the source of truth for project behavior.
- When guidance conflicts, use the most specific applicable document.

# Rule Terms

- A `rule` is the general term for any constraint, requirement, definition, or similar guidance item.
- A `rule set` comprises all rules inside one file that share the same rule set ID.
- Any rule inside a rule set shall use an ID following the schema `RULESET-0001`, `RULESET-0002`, and so on.
- Rules without a rule set ID are also valid, but they are not addressable by rule ID.
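As an illustration, a rule set file under this ID schema could look like the sketch below. The rule texts are hypothetical; only the `LII` abbreviation is borrowed from the example phrasing under Optional Rule Sets.

```text
# Lean Interface Iteration (rule set ID: LII)

- LII-0001: Prefer extending an existing screen over adding a new one.
- LII-0002: Remove placeholder navigation targets before release.
```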
# Scope Of This File

This file should contain:

- Generic agent behavior and constraints.
- Rules that are reusable across multiple projects.
- Links to optional guideline modules.
- Links to project-specific requirements.
- Commented optional templates for released-product documentation and agent-output locations.

This file should not contain:

- Project business requirements.
- Project architecture decisions.
- Stack-specific implementation details unless they are universally applicable.
- Task-specific runbooks that belong in dedicated modules.

# Default Agent Behavior

- Read the relevant context before making changes.
- Prefer small, understandable edits over broad refactors.
- Preserve existing patterns unless there is a clear reason to change them.
- Document assumptions when context is missing.
- Ignore HTML comment segments.
- If a more specific enabled guideline exists for the current task, follow it.

# Guideline Structure

Use the following structure for reusable guidance files and project-specific documentation as needed:

```text
/
|-- AGENTS.md
|-- guidance/
|   |-- stacks/
|   |-- conventions/
|   `-- workflows/
|-- prompts/
`-- requirements/

Optional files and directories:

|-- SCRATCHPAD.md
|-- docs/
|   |-- readme.md
|   |-- installation.md
|   `-- history.md
|-- process/
|   |-- log.md
|   `-- coding-handbook.md
```

# Optional Reusable Modules

Add files under `guidance/` only when they are needed.

# Optional Scratchpad

- `SCRATCHPAD.md` is an optional repo-root scratchpad for temporary information aimed at the next iteration.
- Developers may create or delete `SCRATCHPAD.md` at any time.
- Developers may refer to `SCRATCHPAD.md` as `scratchpad` when giving agents a source or target for information.
- Agents may read, update, create, or remove the scratchpad when the task explicitly calls for it.
- Treat the scratchpad as low-formality working context rather than canonical project truth.
- Use the scratchpad for short-lived notes, open questions, sketches, and temporary decisions that should be resolved away.
- Move durable outcomes into `requirements/`, `guidance/`, code, tests, or another long-lived location.
- If `SCRATCHPAD.md` is absent, agents should continue normally.

# Optional Rule Sets

- Optional rule sets may be stored in `guidance/optional/` or in `guidance/{section}/optional/`.
- Optional rule sets are inactive by default and shall only be applied when a prompt explicitly requests them, for example by phrases such as `Apply rules for lean interface iteration in the following steps.` or `Apply LII rules.`
- An optional rule set may be requested by its descriptive name, by its rule set ID, or by another equally clear explicit reference.
- Agents shall never infer or auto-enable optional rule sets from general intent alone.
- If an optional rule or rule set cannot be identified and addressed clearly, agents shall stop and ask before proceeding.

# Prepared Orders

- An `order` is a prepared prompt for one isolated operation rather than a general workflow or standing rule set.
- Orders shall be stored under `prompts/`.
- Order files shall use the naming schema `ORDER-0001-<slug>.md`, `ORDER-0002-<slug>.md`, and so on.
- The canonical order identifier is the `ORDER-0001` style prefix. The trailing slug is descriptive only.
- Recommended internal order file structure is: prompt ID, prompt name, purpose, trigger examples, scope, operation, and expected output.
- Orders shall only be executed when they are explicitly requested by a prompt such as `Execute ORDER-0007.` or `Execute ORDER 7.`
- Agents may accept an unambiguous short numeric reference such as `ORDER 7` as an alias for `ORDER-0007`.
- If an order cannot be identified uniquely and clearly, agents shall stop and ask before proceeding.
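A minimal order file following the recommended internal structure might look like the sketch below; the slug, purpose, scope, and operation are hypothetical, while the `ORDER-0007` identifier matches the trigger examples above. It would be stored as `prompts/ORDER-0007-refresh-fixtures.md`.

```text
# ORDER-0007 - Refresh Fixtures

- Prompt ID: ORDER-0007
- Prompt name: Refresh Fixtures
- Purpose: regenerate stale test fixtures in one isolated pass.
- Trigger examples: `Execute ORDER-0007.`, `Execute ORDER 7.`
- Scope: test fixtures only; no production code changes.
- Operation: rebuild the fixtures, then run the affected tests.
- Expected output: updated fixture files plus a short change summary.
```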
# Toolstack Guides

Location:

```text
guidance/stacks/
```

Examples:

- `guidance/stacks/python.md`
- `guidance/stacks/typescript.md`
- `guidance/stacks/docker.md`
- `guidance/stacks/terraform.md`

Use for:

- Language or framework expectations.
- Tooling and environment conventions.
- Build, test, and runtime guidance tied to a specific stack.

# Coding Conventions

Location:

```text
guidance/conventions/
```

Examples:

- `guidance/conventions/naming.md`
- `guidance/conventions/testing.md`
- `guidance/conventions/review.md`

Use for:

- Naming and structure conventions.
- Testing expectations.
- Code review and quality rules.

# Recurring Workflows

Location:

```text
guidance/workflows/
```

Examples:

- `guidance/workflows/feature-delivery.md`
- `guidance/workflows/bugfix.md`
- `guidance/workflows/release.md`
- `guidance/workflows/incident-response.md`

Use for:

- Repeatable task flows.
- Checklists for common delivery work.
- Operational or maintenance procedures.

<!-- Enable this optional section by removing the outer HTML comment markers from this segment when you want agents to create, update, and consult released-product documentation in `docs/`.

# Released Product Documentation

Released-product documentation should live outside the generic sections above.

Recommended location:

```text
docs/
```

Examples:

- `docs/readme.md`
- `docs/installation.md`
- `docs/history.md`

Agent rules for docs output:

- Keep content compact but comprehensive.
- Write for end users, operators, or other consumers of the released product.
- Prefer shipped behavior, supported workflows, and stable terminology over internal implementation detail.
- Keep documentation synchronized with released behavior.
- Update release history when user-visible changes are shipped.

Recommended topics:

- Product overview and intended use.
- Installation, configuration, and upgrade guidance.
- Usage patterns, operational instructions, and support boundaries.
- Compatibility notes, migration notes, and release history.
- Troubleshooting and common pitfalls when relevant. -->

<!-- Enable this optional section by removing the outer HTML comment markers from this segment when you want agents to produce and consult workflow output in `process/`.

# Agent Output In `process/`

The `process/` directory is primarily for agent output created during delivery, maintenance, and review work.

Recommended location:

```text
process/
```

Agent rules for process output:

- Use `process/` for agent-produced artifacts rather than released-product documentation.
- Keep entries concise, traceable, and tied to resulting changes.
- Treat `process/` as workflow output, not as the primary source of product truth.
- Prefer summaries and rationale over raw transcript dumps unless a workflow explicitly requires full prompt history.

# Agent Change Log

Location:

```text
process/log.md
```

Use for:

- Capturing prompts given to agents.
- Recording concise explanations of the resulting changes made by agents.
- Preserving task-by-task rationale, decisions, and implementation notes.

# Coding Handbook

Location:

```text
process/coding-handbook.md
```

Use for:

- A tutorial-style handbook that explains the programming components used in the project.
- Compact but comprehensive technical onboarding material for future contributors.
- Written explanations that connect code structure, concepts, and implementation patterns. -->

# Project-Specific Requirements

Project-specific material should live outside the generic sections above.

Recommended location:

```text
requirements/
```

Examples:

- `requirements/project.md`
- `requirements/architecture.md`
- `requirements/decisions.md`
- `requirements/domain.md`

Use for:

- Product and business requirements.
- Project goals and constraints.
- Architecture and design decisions.
- Domain knowledge that is specific to this repository.

# Agent-Level Variables

When present, `requirements/identifiers.yml` is an optional project-specific input that defines agent-level variables for use inside `requirements/` and `guidance/`.

Variable schema:

- Use `@{VARIABLE_NAME}` for agent-level variables.
- Prefer uppercase snake case names such as `@{PROJECT_ID}` or `@{VENDOR_ID}`.
- Do not treat `${...}` as an agent-level variable form; that syntax may appear in Bash or other code and should not be interpreted as agent metadata.

Scope:

- The effective scope of `requirements/identifiers.yml` is limited to `requirements/` and `guidance/`.
- Definitions from `requirements/identifiers.yml` must not leak into product code.

Defaults:

- Default `@{VENDOR_ID}` is `osgw`.
- Default `@{PROJECT_ID}` is the current repository directory name.

Resolution rules:

- Treat `requirements/identifiers.yml` as optional; when it is absent, agents may still resolve the defaults defined above.
- If a variable is used in `requirements/` or `guidance/` and it is not defined in `requirements/identifiers.yml` and does not have a default in this file, agents may stop and report the undefined variable.
- Prefer updating duplicated identifier values in `requirements/` and `guidance/` to use the variable schema when that improves consistency.
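A hypothetical `requirements/identifiers.yml` under this schema might look like the fragment below. Only the two variable names with documented defaults are shown; the `PROJECT_ID` value assumes the repository directory is named `ffx`.

```yaml
# Agent-level variables; effective only inside requirements/ and guidance/.
VENDOR_ID: osgw
PROJECT_ID: ffx
```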
# Precedence

Some precedence levels may be absent because optional levels can remain inside HTML comments. The smaller numeric index wins.

Apply guidance in this order:

1. Direct user or task instructions.
2. Project-specific documents in `requirements/`.
<!-- 3. Released-product documentation in `docs/` when shipped behavior or user-facing expectations are relevant. -->
4. Relevant modular guides in `guidance/stacks/`, `guidance/conventions/`, or `guidance/workflows/`.
<!-- 5. Agent output in `process/` when prior prompts, rationale, or implementation notes are relevant. -->
6. This `AGENTS.md`.

# Maintenance

- Keep this file short and stable.
- Move detail into dedicated modules when a section becomes too specific or too long.
- Add new guideline files only when they solve a recurring need.
- Remove outdated references when the repository structure changes.

# Current Status

This repository defines the base `AGENTS.md` structure plus project-specific requirements and modular guidance.

Future project work can add:

- Reusable modules under `guidance/`
- Project-specific documentation under `requirements/`
- Optional temporary iteration context in `SCRATCHPAD.md`
- Optional released-product documentation under `docs/` by uncommenting its segment
- Optional agent output under `process/` by uncommenting its segment
- Cross-references from this file once those documents exist
#### README.md (141 changes)

````diff
@@ -1,48 +1,147 @@
 # FFX
+
+FFX is a local CLI and Textual TUI for inspecting TV episode files, storing normalization rules in SQLite, and converting outputs into a predictable stream, metadata, and filename layout.
+
+## Requirements
+
+- Linux-like environment
+- `python3`
+- `ffmpeg`
+- `ffprobe`
+- `cpulimit`
+
 ## Installation
 
-per https:
+FFX uses a two-step local setup flow.
+
+### 1. Install The Bundle
+
+This step creates or reuses the persistent bundle virtualenv in `~/.local/share/ffx.venv`, installs FFX into it, and ensures `ffx` is exposed through a shell alias.
+
 ```sh
-pip install https://<URL>/<Releaser>/ffx.git@<Branch>
+bash tools/setup.sh
 ```
 
-per git:
+If you also want the Python packages needed for the modern test suite:
+
 ```sh
-pip install git+ssh://<Username>@<URL>/<Releaser>/ffx.git@<Branch>
+bash tools/setup.sh --with-tests
 ```
 
-## Version history
+You can verify the bundle state without changing anything:
 
-### 0.1.1
+```sh
+bash tools/setup.sh --check
+```
 
-Bugfixes, TMBD identify shows
+### 2. Prepare System Dependencies And Local User Files
 
-### 0.1.2
+This step installs or verifies workstation dependencies and seeds local config and data directories. It is the step wrapped by the CLI command `ffx configure_workstation`.
 
-Bugfixes
+Run it directly:
 
-### 0.1.3
+```sh
+bash tools/configure_workstation.sh
+```
 
-Subtitle file imports
+Or through the installed CLI:
 
-### 0.2.0
+```sh
+ffx configure_workstation
+```
 
-Tests, Config-File
+Check-only mode is available in both forms:
 
-### 0.2.1
+```sh
+bash tools/configure_workstation.sh --check
+ffx configure_workstation --check
+```
 
-Signature, Tags cleaning, Bugfixes, Refactoring
+`tools/configure_workstation.sh` does not manage the bundle virtualenv. Python-side test packages belong to `tools/setup.sh --with-tests`.
 
-### 0.2.2
+## Basic Usage
 
-CLI-Overrides
+Examples:
+
+```sh
+ffx version
+ffx inspect /path/to/episode.mkv
+ffx convert /path/to/episode.mkv
+ffx shows
+```
+
+## Modern Tests
+
+Install Python test packages first:
+
+```sh
+bash tools/setup.sh --with-tests
+```
+
+Then run the modern automatically discovered test suite:
+
+```sh
+./tools/test.sh
+```
+
+This runner uses `pytest` and intentionally excludes the legacy harness under `tests/legacy/`.
+
+## Default Local Paths
+
+- Config: `~/.local/etc/ffx.json`
+- Database: `~/.local/var/ffx/ffx.db`
+- Log file: `~/.local/var/log/ffx.log`
+- Bundle venv: `~/.local/share/ffx.venv`
+
+## TMDB
+
+TMDB-backed metadata enrichment requires `TMDB_API_KEY` to be set in the environment.
+
+## Version History
+
+### 0.2.4
+
+- lightweight CLI commands now stay import-light via lazy runtime loading
+- setup/config templating moved to `assets/ffx.json.j2`
+- aligned two-step local setup wrappers: `ffx setup` and `ffx configure_workstation`
+- combined `ffprobe` payload reuse in `FileProperties`
+- configurable crop-detect sampling plus per-process crop result caching
+- single-query controller accessors and conditional DB schema bootstrap
+- shared screen bootstrap/controller wiring for large detail screens
+- configurable default season/episode digit lengths
+- digit-aware `rename` and padded `unmux` filename markers
+
 ### 0.2.3
 
-PyPi packaging
-Templating output filename
-Season shiftung
-DB-Versionierung
+- PyPI packaging
+- output filename templating
+- season shifting
+- DB versioning
+
+### 0.2.2
+
+- CLI overrides
+
+### 0.2.1
+
+- signature handling
+- tag cleanup
+- bugfixes and refactoring
+
+### 0.2.0
+
+- tests
+- config file
+
+### 0.1.3
+
+- subtitle file imports
+
+### 0.1.2
+
+- bugfixes
+
+### 0.1.1
+
+- bugfixes
+- TMDB show identification
````
#### SCRATCHPAD.md (new file, 128 lines)

# Scratchpad

## Goal

- Capture a compact, project-wide list of optimization candidates after a broad scan of the current FFX codebase, tooling, and requirements.

## Settled

- The biggest near-term wins are in startup cost, repeated subprocess work, repeated database query patterns, and general repo hygiene.
- This list is intentionally optimization-oriented rather than bug-oriented. Some items below also improve correctness or maintainability, but they were selected because they can reduce runtime cost, operator friction, or iteration overhead.
- A first modern integration slice now exists under [`tests/integration/subtrack_mapping`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping). Remaining test-suite cleanup is now mostly about migrating and shrinking the legacy harness surface under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy).
- Shared CLI defaults for container/output tokens now live outside [`src/ffx/ffx_controller.py`](/home/osgw/.local/src/codex/ffx/src/ffx/ffx_controller.py), and a focused unit test locks in the lazy-import contract.
- Helper filename and rich-text utilities now use compiled raw regexes plus translate-based filename filtering, with unit coverage for TMDB suffix rewriting and Rich color stripping.
- Process resource limiting now has explicit disabled/default states in the CLI and requirements, and combined CPU-plus-niceness wrapping now executes as `cpulimit -- nice -n ... <command>` instead of a less explicit prefix chain.
- FFX logger setup now reuses named handlers, and fallback logger access no longer mutates handlers in ordinary constructors and helpers.
- The process wrapper now uses `subprocess.run(...)` with centralized command formatting plus stable timeout and missing-command error mapping.
- Pattern matching now uses cached compiled regexes plus explicit duplicate-match errors, and pattern creation flows no longer persist zero-track patterns.

## Focused Snapshot

- Highest-leverage application optimizations:
  - Decide whether placeholder help/settings screens should ship or disappear.
  - Trim dead helpers and other dormant surface that still looks active.

- Highest-leverage repo and workflow optimizations:
  - Continue migrating the oversized legacy test/combinator surface into focused modern tests so it is easier to run, debug, and extend.

## Optimization Candidates

1. Placeholder UI surfaces should either ship or disappear
   - [`src/ffx/help_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/help_screen.py) and [`src/ffx/settings_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/settings_screen.py) are placeholders.
   - Optimization:
     - Either remove them from the active UI surface or complete them.
     - Avoid paying ongoing maintenance cost for unfinished navigation targets.
   - Expected value:
     - Leaner interface.
     - Lower UX ambiguity.

2. Several helper functions are unfinished or dead-weight
   - [`src/ffx/helper.py`](/home/osgw/.local/src/codex/ffx/src/ffx/helper.py) contains `permutateList(...): pass`.
   - There are many combinator and conversion placeholders across tests and migrations.
   - Optimization:
     - Remove dead code, finish it, or isolate it behind a clearly dormant area.
     - Avoid carrying stubbed utility surface that looks reusable but is not.
   - Expected value:
     - Smaller mental model.
     - Less time spent re-evaluating inactive paths.

3. Test suite shape is expensive to understand and likely expensive to run
   - The project still carries a large legacy matrix of combinator files under [`tests/legacy`](/home/osgw/.local/src/codex/ffx/tests/legacy), several placeholder `pass` implementations, and at least one suspicious filename with an embedded space: [`tests/legacy/disposition_combinator_2_3 .py`](/home/osgw/.local/src/codex/ffx/tests/legacy/disposition_combinator_2_3 .py).
   - A first focused replacement slice now exists in [`tests/integration/subtrack_mapping/test_cli_bundle.py`](/home/osgw/.local/src/codex/ffx/tests/integration/subtrack_mapping/test_cli_bundle.py), so the remaining work is migration and consolidation rather than creating the modern test shape from scratch.
   - Optimization:
     - Continue replacing broad combinator matrices with focused parametrized integration and unit tests.
     - Retire the bespoke legacy discovery and runner path once equivalent coverage exists.
     - Normalize file naming and test discovery conventions.
   - Expected value:
     - Faster contributor onboarding.
     - Easier CI adoption later.
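The migration from a matrix of near-identical combinator files to parametrized tests can be sketched as follows. The function under test, `map_subtracks`, and its cases are stand-ins, not FFX's actual API; the point is the shape of the replacement.

```python
import pytest


def map_subtracks(tracks):
    # Stand-in for the real mapping logic under test; here it just
    # returns the track IDs in a normalized (sorted) order.
    return sorted(tracks)


# One parametrized test replaces many near-duplicate legacy files:
# each former combinator file becomes a single row of case data.
@pytest.mark.parametrize(
    "tracks, expected",
    [
        ([2, 1], [1, 2]),
        ([3, 1, 2], [1, 2, 3]),
        ([], []),
    ],
)
def test_map_subtracks(tracks, expected):
    assert map_subtracks(tracks) == expected
```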
## Open
|
||||||
|
|
||||||
|
- Should optimization work focus first on operator-perceived latency, internal maintainability, or correctness-risk cleanup that also has performance upside?
|
||||||
|
- Is the long-term supported model still “local Linux workstation plus Textual UI,” or should optimization decisions bias toward a more scriptable/headless CLI?
|
||||||
|
|
||||||
|
## Gaps Right Now
|
||||||
|
|
||||||
|
- No explicit prioritization owner or milestone for the optimization backlog.
|
||||||
|
- No benchmark or timing harness exists for startup, probe, DB, or conversion orchestration overhead.
|
||||||
|
- Repo hygiene is still mixed with generated artifacts and some clearly unfinished files.
|
||||||
|
- The legacy TMDB-backed `Scenario 4` path is currently blocked by a pattern/track regression: `Patterns must define at least one track before they can be stored.` This surfaced while rerunning TMDB-dependent checks after the zero-track pattern hardening.
|
||||||
|
|
||||||
|
## Next
|
||||||
|
|
||||||
|
1. Triage the list into quick wins, medium refactors, and long-horizon cleanup.
|
||||||
|
2. Tackle the cheapest remaining product-surface cleanup first:
|
||||||
|
- placeholder UI surfaces and dead helper cleanup.
|
||||||
|
3. Continue replacing oversized legacy test matrices with focused modern integration and unit coverage.
|
||||||
|
4. Triage the legacy `Scenario 4` pattern/track failure and decide whether to fix the harness, adapt it to the zero-track guard, or retire that path during the ongoing test-suite migration.
|
||||||
|
|
||||||
|
## Shifted Season Status (2026-04-12)
|
||||||
|
|
||||||
|
- Current assessment:
|
||||||
|
- The shifted-season subsystem is present end to end and looks feature-complete in shape, but it is not yet hardened.
|
||||||
|
- The storage, TUI CRUD surface, and CLI/TMDB filename application path all exist, so this is no longer a stubbed or half-started area.
|
||||||
|
- The main gap is correctness and direct verification rather than missing surface area.
|
||||||
|
|
||||||
|
- Implemented surface confirmed:
|
||||||
|
- Requirements still treat shifted seasons as part of the accepted product surface in [`requirements/project.md`](/home/osgw/.local/src/codex/ffx/requirements/project.md) and [`requirements/architecture.md`](/home/osgw/.local/src/codex/ffx/requirements/architecture.md).
|
||||||
|
- Persistence exists via [`src/ffx/model/shifted_season.py`](/home/osgw/.local/src/codex/ffx/src/ffx/model/shifted_season.py) plus the `Show.shifted_seasons` relationship in [`src/ffx/model/show.py`](/home/osgw/.local/src/codex/ffx/src/ffx/model/show.py).
|
||||||
|
- CRUD logic exists in [`src/ffx/shifted_season_controller.py`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py).
|
||||||
|
- Textual add/edit/delete flows are wired through [`src/ffx/shifted_season_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_details_screen.py), [`src/ffx/shifted_season_delete_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_delete_screen.py), and the show details table in [`src/ffx/show_details_screen.py`](/home/osgw/.local/src/codex/ffx/src/ffx/show_details_screen.py).
|
||||||
|
- CLI conversion applies season shifts before TMDB lookup and output suffix generation in [`src/ffx/cli.py`](/home/osgw/.local/src/codex/ffx/src/ffx/cli.py).
|
||||||
|
|
||||||
|
- Verified current behavior:
  - `~/.local/share/ffx.venv/bin/python -m unittest discover -s tests/unit -p 'test_*.py'` passed on 2026-04-12: `75` tests in `0.795s`.
  - That run emitted `ResourceWarning` messages for unclosed SQLite connections, so the suite is green but not perfectly clean.
  - There is almost no direct shifted-season coverage in the modern tests:
    - [`tests/unit/test_cli_rename_only.py`](/home/osgw/.local/src/codex/ffx/tests/unit/test_cli_rename_only.py) stubs `ShiftedSeasonController` rather than exercising it.
    - [`tests/unit/test_screen_support.py`](/home/osgw/.local/src/codex/ffx/tests/unit/test_screen_support.py) only verifies controller bootstrap wiring.
  - Net effect: the subsystem is integrated, but its core rules are effectively untested by the current modern suite.
- Reproduced correctness gaps:
  - Overlap validation is broken in [`src/ffx/shifted_season_controller.py:41`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:41) because `getOriginalSeason` is compared as a method object instead of being called.
  - Reproduction on 2026-04-12 with a temp SQLite DB:
    - Added `S1 E1-E10`.
    - `checkShiftedSeason(...)` incorrectly returned `True` for overlapping `S1 E5-E15`.
    - `addShiftedSeason(...)` then stored the overlapping row successfully.
  - `updateShiftedSeason(...)` in [`src/ffx/shifted_season_controller.py:93`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:93) does not enforce episode ordering, so an invalid range like `first_episode=20`, `last_episode=10` was accepted in the same reproduction.
  - Because [`src/ffx/shifted_season_controller.py:213`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:213) returns the first matching sibling and [`src/ffx/shifted_season_controller.py:163`](/home/osgw/.local/src/codex/ffx/src/ffx/shifted_season_controller.py:163) applies no explicit ordering, overlapping rows would also make runtime shifting ambiguous.
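The method-object comparison described above is a classic Python pitfall; a minimal sketch of the failure mode and its fix (class and attribute names here are hypothetical stand-ins, not the actual ORM code):

```python
class ShiftedSeason:
    """Minimal stand-in for the stored row (hypothetical, for illustration)."""

    def __init__(self, original_season):
        self._original_season = original_season

    def getOriginalSeason(self):
        return self._original_season


existing = ShiftedSeason(1)

# Buggy form: the bound method object itself is compared, never the season
# number, so the equality is always False and the overlap filter matches nothing.
buggy_match = existing.getOriginalSeason == 1

# Fixed form: call the accessor so the stored integer is compared.
fixed_match = existing.getOriginalSeason() == 1

print(buggy_match, fixed_match)  # False True
```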
- Progress summary:
  - Good progress:
    - The subsystem exists across requirements, schema, UI, and conversion flow.
    - It appears fully integrated into the show-editing workflow rather than parked as dead code.
  - Incomplete progress:
    - Validation logic is not trustworthy yet.
    - Modern tests do not currently protect the subsystem's real behavior.
    - User-facing error feedback in the shifted-season screens still has placeholder `#TODO: Meldung` branches.
- Recommended next slice:
  1. Add direct controller tests for overlap rejection, episode-order validation, and `shiftSeason(...)` selection behavior.
  2. Fix `checkShiftedSeason(...)` and add the same range/order validation to `updateShiftedSeason(...)`.
  3. Make sibling selection deterministic or enforce non-overlap strongly enough that ordering no longer matters in practice.
  4. Add at least one focused integration test that proves a stored shifted season changes TMDB lookup and/or generated filename numbering during conversion.
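The overlap and ordering rules in steps 1-2 reduce to two pure predicates that are easy to test directly; a minimal sketch (function names are hypothetical, not the controller's API):

```python
def ranges_overlap(first_a, last_a, first_b, last_b):
    """True when two inclusive episode ranges share at least one episode."""
    return first_a <= last_b and first_b <= last_a


def is_valid_range(first, last):
    """Episode ranges must start at 1 or later and be ordered."""
    return 1 <= first <= last


# Mirrors the reproduction above: S1 E1-E10 vs. S1 E5-E15 must be rejected.
assert ranges_overlap(1, 10, 5, 15)
assert not ranges_overlap(1, 10, 11, 20)

# The update path must refuse first_episode=20, last_episode=10.
assert not is_valid_range(20, 10)
assert is_valid_range(1, 10)
```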
## Delete When

- Delete this scratchpad once the optimization backlog is either converted into issues/work items or distilled into durable project guidance.
36
assets/ffx.json.j2
Normal file
@@ -0,0 +1,36 @@
{
    "databasePath": {{ database_path_json }},
    "logDirectory": {{ log_directory_json }},
    "subtitlesDirectory": {{ subtitles_directory_json }},
    "defaultIndexSeasonDigits": {{ default_index_season_digits }},
    "defaultIndexEpisodeDigits": {{ default_index_episode_digits }},
    "defaultIndicatorSeasonDigits": {{ default_indicator_season_digits }},
    "defaultIndicatorEpisodeDigits": {{ default_indicator_episode_digits }},
    "metadata": {
        "signature": {
            "RECODED_WITH": "FFX"
        },
        "remove": [
            "VERSION-eng",
            "creation_time",
            "NAME"
        ],
        "streams": {
            "remove": [
                "BPS",
                "NUMBER_OF_FRAMES",
                "NUMBER_OF_BYTES",
                "_STATISTICS_WRITING_APP",
                "_STATISTICS_WRITING_DATE_UTC",
                "_STATISTICS_TAGS",
                "BPS-eng",
                "DURATION-eng",
                "NUMBER_OF_FRAMES-eng",
                "NUMBER_OF_BYTES-eng",
                "_STATISTICS_WRITING_APP-eng",
                "_STATISTICS_WRITING_DATE_UTC-eng",
                "_STATISTICS_TAGS-eng"
            ]
        }
    }
}
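The `*_json` variables in this template are embedded without surrounding quotes, which only works if the caller pre-serializes them. A dependency-free sketch of that rendering step (stdlib `string.Template` stands in for Jinja2; the values are hypothetical, the real ones come from the Ansible inventory):

```python
import json
from string import Template

# Stand-in for the Jinja2 rendering step; the repo's template uses {{ ... }}
# placeholders, but the pre-serialization idea is the same.
template = Template(
    '{ "databasePath": $database_path_json, "defaultIndexSeasonDigits": $digits }'
)

# The *_json value is pre-serialized with json.dumps, so the template can
# embed it unquoted and escaping stays correct.
rendered = template.substitute(
    database_path_json=json.dumps('/home/user/.local/var/ffx/ffx.db'),
    digits=2,
)

config = json.loads(rendered)  # the rendered text must parse as valid JSON
print(config["databasePath"])
```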
@@ -1,62 +0,0 @@
from enum import Enum
from .track_type import TrackType


class AudioLayout(Enum):

    LAYOUT_STEREO = {"label": "stereo", "index": 1}
    LAYOUT_5_1 = {"label": "5.1(side)", "index": 2}
    LAYOUT_6_1 = {"label": "6.1", "index": 3}
    LAYOUT_7_1 = {"label": "7.1", "index": 4} #TODO: Does this exist?

    LAYOUT_6CH = {"label": "6ch", "index": 5}
    LAYOUT_5_0 = {"label": "5.0(side)", "index": 6}

    LAYOUT_UNDEFINED = {"label": "undefined", "index": 0}

    def label(self):
        """Returns the audio layout as string"""
        return str(self.value['label'])

    def index(self):
        """Returns the audio layout as integer"""
        return int(self.value['index'])

    @staticmethod
    def fromLabel(label : str):
        try:
            return [a for a in AudioLayout if a.value['label'] == str(label)][0]
        except:
            return AudioLayout.LAYOUT_UNDEFINED

    @staticmethod
    def fromIndex(index : int):
        try:
            return [a for a in AudioLayout if a.value['index'] == int(index)][0]
        except:
            return AudioLayout.LAYOUT_UNDEFINED

    @staticmethod
    def identify(streamObj):

        FFPROBE_LAYOUT_KEY = 'channel_layout'
        FFPROBE_CHANNELS_KEY = 'channels'
        FFPROBE_CODEC_TYPE_KEY = 'codec_type'

        if (type(streamObj) is not dict
                or FFPROBE_CODEC_TYPE_KEY not in streamObj.keys()
                or streamObj[FFPROBE_CODEC_TYPE_KEY] != TrackType.AUDIO.label()):
            raise Exception('Not an ffprobe audio stream object')

        if FFPROBE_LAYOUT_KEY in streamObj.keys():
            matchingLayouts = [l for l in AudioLayout if l.label() == streamObj[FFPROBE_LAYOUT_KEY]]
            if matchingLayouts:
                return matchingLayouts[0]

        if (FFPROBE_CHANNELS_KEY in streamObj.keys()
                and int(streamObj[FFPROBE_CHANNELS_KEY]) == 6):
            return AudioLayout.LAYOUT_6CH

        return AudioLayout.LAYOUT_UNDEFINED
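The deleted `identify` consumed the dict shape that `ffprobe -show_streams -of json` emits: try the `channel_layout` label first, then fall back to a bare 6-channel count. A self-contained sketch of that fallback chain, with the enum collapsed to a dict for brevity (the stream dicts below are hypothetical examples):

```python
# Label-to-index table mirroring the deleted enum values.
LAYOUTS = {"stereo": 1, "5.1(side)": 2, "6.1": 3, "7.1": 4, "6ch": 5, "5.0(side)": 6}

def identify_layout(stream: dict) -> str:
    """Sketch of AudioLayout.identify's fallback chain over an ffprobe stream dict."""
    if stream.get("codec_type") != "audio":
        raise ValueError("not an ffprobe audio stream object")
    layout = stream.get("channel_layout")
    if layout in LAYOUTS:             # exact layout label wins
        return layout
    if int(stream.get("channels", 0)) == 6:  # bare channel-count heuristic
        return "6ch"
    return "undefined"

print(identify_layout({"codec_type": "audio", "channel_layout": "5.1(side)"}))  # 5.1(side)
print(identify_layout({"codec_type": "audio", "channels": 6}))                  # 6ch
```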
@@ -1,142 +0,0 @@
import os, json

class ConfigurationController():

    CONFIG_FILENAME = 'ffx.json'
    DATABASE_FILENAME = 'ffx.db'
    LOG_FILENAME = 'ffx.log'

    DATABASE_PATH_CONFIG_KEY = 'databasePath'
    LOG_DIRECTORY_CONFIG_KEY = 'logDirectory'
    OUTPUT_FILENAME_TEMPLATE_KEY = 'outputFilenameTemplate'


    def __init__(self):

        self.__homeDir = os.path.expanduser("~")
        self.__localVarDir = os.path.join(self.__homeDir, '.local', 'var')
        self.__localEtcDir = os.path.join(self.__homeDir, '.local', 'etc')

        self.__configurationData = {}

        # .local/etc/ffx.json
        self.__configFilePath = os.path.join(self.__localEtcDir, ConfigurationController.CONFIG_FILENAME)
        if os.path.isfile(self.__configFilePath):
            with open(self.__configFilePath, 'r') as configurationFile:
                self.__configurationData = json.load(configurationFile)

        if ConfigurationController.DATABASE_PATH_CONFIG_KEY in self.__configurationData.keys():
            self.__databaseFilePath = self.__configurationData[ConfigurationController.DATABASE_PATH_CONFIG_KEY]
            os.makedirs(os.path.dirname(self.__databaseFilePath), exist_ok=True)
        else:
            ffxVarDir = os.path.join(self.__localVarDir, 'ffx')
            os.makedirs(ffxVarDir, exist_ok=True)
            self.__databaseFilePath = os.path.join(ffxVarDir, ConfigurationController.DATABASE_FILENAME)

        if ConfigurationController.LOG_DIRECTORY_CONFIG_KEY in self.__configurationData.keys():
            self.__logDir = self.__configurationData[ConfigurationController.LOG_DIRECTORY_CONFIG_KEY]
        else:
            self.__logDir = os.path.join(self.__localVarDir, 'log')
            os.makedirs(self.__logDir, exist_ok=True)


    def getHomeDirectory(self):
        return self.__homeDir

    def getLogFilePath(self):
        return os.path.join(self.__logDir, ConfigurationController.LOG_FILENAME)

    def getDatabaseFilePath(self):
        return self.__databaseFilePath


    def getData(self):
        return self.__configurationData


#
#
#
# def addPattern(self, patternDescriptor):
#
#     try:
#
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.show_id == int(patternDescriptor['show_id']),
#                                     Pattern.pattern == str(patternDescriptor['pattern']))
#
#         if not q.count():
#             pattern = Pattern(show_id = int(patternDescriptor['show_id']),
#                               pattern = str(patternDescriptor['pattern']))
#             s.add(pattern)
#             s.commit()
#             return pattern.getId()
#         else:
#             return 0
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.addPattern(): {repr(ex)}")
#     finally:
#         s.close()
#
#
# def updatePattern(self, patternId, patternDescriptor):
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.id == int(patternId))
#
#         if q.count():
#
#             pattern = q.first()
#
#             pattern.show_id = int(patternDescriptor['show_id'])
#             pattern.pattern = str(patternDescriptor['pattern'])
#
#             s.commit()
#             return True
#
#         else:
#             return False
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.updatePattern(): {repr(ex)}")
#     finally:
#         s.close()
#
#
#
# def findPattern(self, patternDescriptor):
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.show_id == int(patternDescriptor['show_id']), Pattern.pattern == str(patternDescriptor['pattern']))
#
#         if q.count():
#             pattern = q.first()
#             return int(pattern.id)
#         else:
#             return None
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.findPattern(): {repr(ex)}")
#     finally:
#         s.close()
#
#
# def getPattern(self, patternId : int):
#
#     if type(patternId) is not int:
#         raise ValueError(f"PatternController.getPattern(): Argument patternId is required to be of type int")
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.id == int(patternId))
#
#         return q.first() if q.count() else None
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.getPattern(): {repr(ex)}")
#     finally:
#         s.close()
#
@@ -1,15 +0,0 @@
VERSION='0.2.3'
DATABASE_VERSION = 2

DEFAULT_QUALITY = 32
DEFAULT_AV1_PRESET = 5

DEFAULT_STEREO_BANDWIDTH = "112"
DEFAULT_AC3_BANDWIDTH = "256"
DEFAULT_DTS_BANDWIDTH = "320"
DEFAULT_7_1_BANDWIDTH = "384"

DEFAULT_cut_start = 60
DEFAULT_cut_length = 180

DEFAULT_OUTPUT_FILENAME_TEMPLATE = '{{ ffx_show_name }} - {{ ffx_index }}{{ ffx_index_separator }}{{ ffx_episode_name }}{{ ffx_indicator_separator }}{{ ffx_indicator }}'
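`DEFAULT_OUTPUT_FILENAME_TEMPLATE` is Jinja2-style; a stdlib sketch of how its placeholders combine (variable names taken from the template, values hypothetical — the real ones come from the TMDB lookup):

```python
# str.format stands in for the Jinja2 rendering; same placeholder names.
template = ('{ffx_show_name} - {ffx_index}{ffx_index_separator}'
            '{ffx_episode_name}{ffx_indicator_separator}{ffx_indicator}')

# Hypothetical episode values for illustration only.
name = template.format(
    ffx_show_name='Some Show',
    ffx_index='01x02',
    ffx_index_separator=' - ',
    ffx_episode_name='Pilot',
    ffx_indicator_separator='_',
    ffx_indicator='S01E02',
)
print(name)  # Some Show - 01x02 - Pilot_S01E02
```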
@@ -1,102 +0,0 @@
import os, click

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from ffx.model.show import Base

from ffx.model.property import Property

from ffx.constants import DATABASE_VERSION


DATABASE_VERSION_KEY = 'database_version'

class DatabaseVersionException(Exception):
    def __init__(self, errorMessage):
        super().__init__(errorMessage)


def databaseContext(databasePath: str = ''):

    databaseContext = {}

    if databasePath is None:
        # sqlite:///:memory:
        databasePath = ':memory:'
    elif not databasePath:
        homeDir = os.path.expanduser("~")
        ffxVarDir = os.path.join(homeDir, '.local', 'var', 'ffx')
        if not os.path.exists(ffxVarDir):
            os.makedirs(ffxVarDir)
        databasePath = os.path.join(ffxVarDir, 'ffx.db')

    databaseContext['url'] = f"sqlite:///{databasePath}"
    databaseContext['engine'] = create_engine(databaseContext['url'])
    databaseContext['session'] = sessionmaker(bind=databaseContext['engine'])

    Base.metadata.create_all(databaseContext['engine'])

    # isSyncronuous = False
    # while not isSyncronuous:
    # while True:
    #     try:
    #         with databaseContext['database_engine'].connect() as connection:
    #             connection.execute(sqlalchemy.text('PRAGMA foreign_keys=ON;'))
    #         #isSyncronuous = True
    #         break
    #     except sqlite3.OperationalError:
    #         time.sleep(0.1)

    ensureDatabaseVersion(databaseContext)

    return databaseContext

def ensureDatabaseVersion(databaseContext):

    currentDatabaseVersion = getDatabaseVersion(databaseContext)
    if currentDatabaseVersion:
        if currentDatabaseVersion != DATABASE_VERSION:
            raise DatabaseVersionException(f"Current database version ({currentDatabaseVersion}) does not match required ({DATABASE_VERSION})")
    else:
        setDatabaseVersion(databaseContext, DATABASE_VERSION)


def getDatabaseVersion(databaseContext):

    try:

        Session = databaseContext['session']
        s = Session()
        q = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY)

        return int(q.first().value) if q.count() else 0

    except Exception as ex:
        raise click.ClickException(f"getDatabaseVersion(): {repr(ex)}")
    finally:
        s.close()


def setDatabaseVersion(databaseContext, databaseVersion: int):

    try:
        Session = databaseContext['session']
        s = Session()

        q = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY)

        dbVersion = int(databaseVersion)

        versionProperty = q.first()
        if versionProperty:
            versionProperty.value = str(dbVersion)
        else:
            versionProperty = Property(key = DATABASE_VERSION_KEY,
                                       value = str(dbVersion))
            s.add(versionProperty)
        s.commit()

    except Exception as ex:
        raise click.ClickException(f"setDatabaseVersion(): {repr(ex)}")
    finally:
        s.close()
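The `ensureDatabaseVersion` guard above is a common stamp-or-refuse pattern: write the schema version on first run, then reject any database whose stored version mismatches. A stdlib `sqlite3` sketch of the same idea (hypothetical table/key names, not the SQLAlchemy `Property` model used here):

```python
import sqlite3

REQUIRED_VERSION = 2

def ensure_version(conn: sqlite3.Connection) -> None:
    """Stamp the schema version on first run; refuse a mismatched database."""
    conn.execute("CREATE TABLE IF NOT EXISTS property (key TEXT PRIMARY KEY, value TEXT)")
    row = conn.execute("SELECT value FROM property WHERE key = 'database_version'").fetchone()
    if row is None:
        # First run: persist the required version.
        conn.execute("INSERT INTO property VALUES ('database_version', ?)",
                     (str(REQUIRED_VERSION),))
        conn.commit()
    elif int(row[0]) != REQUIRED_VERSION:
        raise RuntimeError(f"database version {row[0]} does not match required {REQUIRED_VERSION}")

conn = sqlite3.connect(":memory:")
ensure_version(conn)   # first call stamps the version
ensure_version(conn)   # second call passes the check
print(conn.execute("SELECT value FROM property WHERE key='database_version'").fetchone()[0])
```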
@@ -1,809 +0,0 @@
#! /usr/bin/python3

import os, click, time, logging, shutil

from ffx.configuration_controller import ConfigurationController

from ffx.file_properties import FileProperties

from ffx.ffx_app import FfxApp
from ffx.ffx_controller import FfxController
from ffx.tmdb_controller import TmdbController

from ffx.database import databaseContext

from ffx.media_descriptor import MediaDescriptor
from ffx.track_descriptor import TrackDescriptor
from ffx.show_descriptor import ShowDescriptor

from ffx.track_type import TrackType
from ffx.video_encoder import VideoEncoder
from ffx.track_disposition import TrackDisposition
from ffx.track_codec import TrackCodec

from ffx.process import executeProcess
from ffx.helper import filterFilename, substituteTmdbFilename
from ffx.helper import getEpisodeFileBasename

from ffx.constants import DEFAULT_STEREO_BANDWIDTH, DEFAULT_AC3_BANDWIDTH, DEFAULT_DTS_BANDWIDTH, DEFAULT_7_1_BANDWIDTH

from ffx.filter.quality_filter import QualityFilter
from ffx.filter.preset_filter import PresetFilter

from ffx.filter.crop_filter import CropFilter
from ffx.filter.nlmeans_filter import NlmeansFilter

from ffx.constants import VERSION

from ffx.shifted_season_controller import ShiftedSeasonController


@click.group()
@click.pass_context
@click.option('--database-file', type=str, default='', help='Path to database file')
@click.option('-v', '--verbose', type=int, default=0, help='Set verbosity of output')
@click.option("--dry-run", is_flag=True, default=False)
def ffx(ctx, database_file, verbose, dry_run):
    """FFX"""

    ctx.obj = {}

    ctx.obj['config'] = ConfigurationController()

    ctx.obj['database'] = databaseContext(databasePath=database_file
                                          if database_file else ctx.obj['config'].getDatabaseFilePath())

    ctx.obj['dry_run'] = dry_run
    ctx.obj['verbosity'] = verbose

    # Critical 50
    # Error    40
    # Warning  30
    # Info     20
    # Debug    10
    fileLogVerbosity = max(40 - verbose * 10, 10)
    consoleLogVerbosity = max(20 - verbose * 10, 10)

    ctx.obj['logger'] = logging.getLogger('FFX')
    ctx.obj['logger'].setLevel(logging.DEBUG)

    ffxFileHandler = logging.FileHandler(ctx.obj['config'].getLogFilePath())
    ffxFileHandler.setLevel(fileLogVerbosity)
    ffxConsoleHandler = logging.StreamHandler()
    ffxConsoleHandler.setLevel(consoleLogVerbosity)

    fileFormatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ffxFileHandler.setFormatter(fileFormatter)
    consoleFormatter = logging.Formatter(
        '%(message)s')
    ffxConsoleHandler.setFormatter(consoleFormatter)

    ctx.obj['logger'].addHandler(ffxConsoleHandler)
    ctx.obj['logger'].addHandler(ffxFileHandler)


# Define a subcommand
@ffx.command()
def version():
    click.echo(VERSION)


# Another subcommand
@ffx.command()
def help():
    click.echo(f"ffx {VERSION}\n")
    click.echo(f"Usage: ffx [input file] [output file] [vp9|av1] [q=[nn[,nn,...]]] [p=nn] [a=nnn[k]] [ac3=nnn[k]] [dts=nnn[k]] [crop]")


@ffx.command()
@click.pass_context
@click.argument('filename', nargs=1)
def inspect(ctx, filename):

    ctx.obj['command'] = 'inspect'
    ctx.obj['arguments'] = {}
    ctx.obj['arguments']['filename'] = filename

    app = FfxApp(ctx.obj)
    app.run()


def getUnmuxSequence(trackDescriptor: TrackDescriptor, sourcePath, targetPrefix, targetDirectory = ''):

    # executable and input file
    commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]

    trackType = trackDescriptor.getType()

    targetPathBase = os.path.join(targetDirectory, targetPrefix) if targetDirectory else targetPrefix

    # mapping
    commandTokens += ['-map',
                      f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}",
                      '-c',
                      'copy']

    trackCodec = trackDescriptor.getCodec()

    # output format
    codecFormat = trackCodec.format()
    if codecFormat is not None:
        commandTokens += ['-f', codecFormat]

    # output filename
    commandTokens += [f"{targetPathBase}.{trackCodec.extension()}"]

    return commandTokens


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option("-o", "--output-directory", type=str, default='')
@click.option("-s", "--subtitles-only", is_flag=True, default=False)
@click.option('--nice', type=int, default=99, help='Niceness of started processes')
@click.option('--cpu', type=int, default=0, help='Limit CPU for started processes to percent')
def unmux(ctx,
          paths,
          label,
          output_directory,
          subtitles_only,
          nice,
          cpu):

    existingSourcePaths = [p for p in paths if os.path.isfile(p)]
    ctx.obj['logger'].debug(f"\nUnmuxing {len(existingSourcePaths)} files")

    ctx.obj['resource_limits'] = {}
    ctx.obj['resource_limits']['niceness'] = nice
    ctx.obj['resource_limits']['cpu_percent'] = cpu

    for sourcePath in existingSourcePaths:

        fp = FileProperties(ctx.obj, sourcePath)

        try:
            sourceMediaDescriptor = fp.getMediaDescriptor()

            season = fp.getSeason()
            episode = fp.getEpisode()

            #TODO: Adapt recognition for all formats
            targetLabel = label if label else fp.getFileBasename()
            targetIndicator = f"_S{season}E{episode}" if label and season != -1 and episode != -1 else ''

            if label and not targetIndicator:
                ctx.obj['logger'].warning(f"Skipping file {fp.getFilename()}: Label set but no indicator recognized")
                continue
            else:
                ctx.obj['logger'].info(f"\nUnmuxing file {fp.getFilename()}\n")

            # for trackDescriptor in sourceMediaDescriptor.getAllTrackDescriptors():
            for trackDescriptor in sourceMediaDescriptor.getTrackDescriptors():

                if trackDescriptor.getType() == TrackType.SUBTITLE or not subtitles_only:

                    # SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH = '[sS]([0-9]+)[eE]([0-9]+)_([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*'
                    targetPrefix = f"{targetLabel}{targetIndicator}_{trackDescriptor.getIndex()}_{trackDescriptor.getLanguage().threeLetter()}"

                    td: TrackDisposition
                    for td in sorted(trackDescriptor.getDispositionSet(), key=lambda d: d.index()):
                        targetPrefix += f"_{td.indicator()}"

                    unmuxSequence = getUnmuxSequence(trackDescriptor, sourcePath, targetPrefix, targetDirectory = output_directory)

                    if unmuxSequence:
                        if not ctx.obj['dry_run']:

                            #TODO #425: Codec Enum
                            ctx.obj['logger'].info(f"Unmuxing stream {trackDescriptor.getIndex()} into file {targetPrefix}.{trackDescriptor.getCodec().extension()}")

                            ctx.obj['logger'].debug(f"Executing unmuxing sequence")

                            out, err, rc = executeProcess(unmuxSequence, context = ctx.obj)
                            if rc:
                                ctx.obj['logger'].error(f"Unmuxing of stream {trackDescriptor.getIndex()} failed with error ({rc}) {err}")
                    else:
                        ctx.obj['logger'].warning(f"Skipping stream with unknown codec")
        except Exception as ex:
            ctx.obj['logger'].warning(f"Skipping File {sourcePath} ({ex})")


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('--nice', type=int, default=99, help='Niceness of started processes')
@click.option('--cpu', type=int, default=0, help='Limit CPU for started processes to percent')
def cropdetect(ctx,
               paths,
               nice,
               cpu):

    existingSourcePaths = [p for p in paths if os.path.isfile(p)]
    ctx.obj['logger'].debug(f"\nUnmuxing {len(existingSourcePaths)} files")

    ctx.obj['resource_limits'] = {}
    ctx.obj['resource_limits']['niceness'] = nice
    ctx.obj['resource_limits']['cpu_percent'] = cpu

    for sourcePath in existingSourcePaths:

        try:

            fp = FileProperties(ctx.obj, sourcePath)
            cropParams = fp.findCropParams()

            click.echo(cropParams)

        except Exception as ex:
            ctx.obj['logger'].warning(f"Skipping File {sourcePath} ({ex})")


@ffx.command()
@click.pass_context
def shows(ctx):

    ctx.obj['command'] = 'shows'

    app = FfxApp(ctx.obj)
    app.run()


def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):

    # Check for multiple default or forced dispositions if not set by user input or database requirements
    #
    # Query user for the correct sub indices, then configure flags in track descriptors associated with media descriptor accordingly.
    # The correct tokens should then be created by
    if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default video stream detected and no prompt set')
        defaultVideoTrackSubIndex = click.prompt("More than one default video stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.VIDEO, defaultVideoTrackSubIndex)

    if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced video stream detected and no prompt set')
        forcedVideoTrackSubIndex = click.prompt("More than one forced video stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.VIDEO, forcedVideoTrackSubIndex)

    if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default audio stream detected and no prompt set')
        defaultAudioTrackSubIndex = click.prompt("More than one default audio stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.AUDIO, defaultAudioTrackSubIndex)

    if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced audio stream detected and no prompt set')
        forcedAudioTrackSubIndex = click.prompt("More than one forced audio stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.AUDIO, forcedAudioTrackSubIndex)

    if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default subtitle stream detected and no prompt set')
        defaultSubtitleTrackSubIndex = click.prompt("More than one default subtitle stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.SUBTITLE, defaultSubtitleTrackSubIndex)

    if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced subtitle stream detected and no prompt set')
        forcedSubtitleTrackSubIndex = click.prompt("More than one forced subtitle stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.SUBTITLE, forcedSubtitleTrackSubIndex)


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=FfxController.DEFAULT_VIDEO_ENCODER, help=f"Target video encoder (vp9, av1 or h264)", show_default=True)
@click.option('-q', '--quality', type=str, default="", help=f"Quality settings to be used with VP9/H264 encoder")
@click.option('-p', '--preset', type=str, default="", help=f"Quality preset to be used with AV1 encoder")
@click.option('-a', '--stereo-bitrate', type=int, default=DEFAULT_STEREO_BANDWIDTH, help=f"Bitrate in kbit/s to be used to encode stereo audio streams", show_default=True)
@click.option('--ac3', type=int, default=DEFAULT_AC3_BANDWIDTH, help=f"Bitrate in kbit/s to be used to encode 5.1 audio streams", show_default=True)
@click.option('--dts', type=int, default=DEFAULT_DTS_BANDWIDTH, help=f"Bitrate in kbit/s to be used to encode 6.1 audio streams", show_default=True)
@click.option('--subtitle-directory', type=str, default='', help='Load subtitles from here')
@click.option('--subtitle-prefix', type=str, default='', help='Subtitle filename prefix')
@click.option('--language', type=str, multiple=True, help='Set stream language. Use format <stream index>:<3 letter iso code>')
@click.option('--title', type=str, multiple=True, help='Set stream title. Use format <stream index>:<title>')
@click.option('--default-video', type=int, default=-1, help='Index of default video stream')
@click.option('--forced-video', type=int, default=-1, help='Index of forced video stream')
@click.option('--default-audio', type=int, default=-1, help='Index of default audio stream')
@click.option('--forced-audio', type=int, default=-1, help='Index of forced audio stream')
@click.option('--default-subtitle', type=int, default=-1, help='Index of default subtitle stream')
@click.option('--forced-subtitle', type=int, default=-1, help='Index of forced subtitle stream')
@click.option('--rearrange-streams', type=str, default="", help='Rearrange output streams order. Use format comma separated integers')
@click.option("--crop", is_flag=False, flag_value="auto", default="none")
@click.option("--cut", is_flag=False, flag_value="default", default="none")
@click.option("--output-directory", type=str, default='')
@click.option("--denoise", is_flag=False, flag_value="default", default="none")
@click.option("--denoise-use-hw", is_flag=True, default=False)
@click.option('--denoise-strength', type=str, default='', help='Denoising strength, more blurring vs more details.')
@click.option('--denoise-patch-size', type=str, default='', help='Subimage size to apply filtering on luminosity plane. Reduces broader noise patterns but costly.')
@click.option('--denoise-chroma-patch-size', type=str, default='', help='Subimage size to apply filtering on chroma planes.')
@click.option('--denoise-research-window', type=str, default='', help='Range to search for comparable patches on luminosity plane. Better filtering but costly.')
@click.option('--denoise-chroma-research-window', type=str, default='', help='Range to search for comparable patches on chroma planes.')
@click.option('--show', type=int, default=-1, help='Set TMDB show identifier')
@click.option('--season', type=int, default=-1, help='Set season of show')
@click.option('--episode', type=int, default=-1, help='Set episode of show')
@click.option("--no-tmdb", is_flag=True, default=False)
|
|
||||||
@click.option("--no-pattern", is_flag=True, default=False)
|
|
||||||
|
|
||||||
@click.option("--dont-pass-dispositions", is_flag=True, default=False)
|
|
||||||
|
|
||||||
@click.option("--no-prompt", is_flag=True, default=False)
|
|
||||||
@click.option("--no-signature", is_flag=True, default=False)
|
|
||||||
@click.option("--keep-mkvmerge-metadata", is_flag=True, default=False)
|
|
||||||
|
|
||||||
@click.option('--nice', type=int, default=99, help='Niceness of started processes')
|
|
||||||
@click.option('--cpu', type=int, default=0, help='Limit CPU for started processes to percent')
|
|
||||||
|
|
||||||
@click.option('--rename-only', is_flag=True, default=False, help='Only renaming, no recoding')
|
|
||||||
|
|
||||||
def convert(ctx,
            paths,
            label,
            video_encoder,
            quality,
            preset,
            stereo_bitrate,
            ac3,
            dts,

            subtitle_directory,
            subtitle_prefix,

            language,
            title,

            default_video,
            forced_video,
            default_audio,
            forced_audio,
            default_subtitle,
            forced_subtitle,

            rearrange_streams,

            crop,
            cut,

            output_directory,

            denoise,
            denoise_use_hw,
            denoise_strength,
            denoise_patch_size,
            denoise_chroma_patch_size,
            denoise_research_window,
            denoise_chroma_research_window,

            show,
            season,
            episode,

            no_tmdb,
            no_pattern,
            dont_pass_dispositions,
            no_prompt,
            no_signature,
            keep_mkvmerge_metadata,

            nice,
            cpu,
            rename_only):
"""Batch conversion of audiovideo files in format suitable for web playback, e.g. jellyfin
|
|
||||||
|
|
||||||
Files found under PATHS will be converted according to parameters.
|
|
||||||
Filename extensions will be changed appropriately.
|
|
||||||
Suffices will we appended to filename in case of multiple created files
|
|
||||||
or if the filename has not changed."""
|
|
||||||
|
|
||||||
    startTime = time.perf_counter()

    context = ctx.obj

    context['video_encoder'] = VideoEncoder.fromLabel(video_encoder)

    #HINT: quick and dirty override for h264, todo improve
    targetFormat = '' if context['video_encoder'] == VideoEncoder.H264 else FfxController.DEFAULT_FILE_FORMAT
    targetExtension = 'mkv' if context['video_encoder'] == VideoEncoder.H264 else FfxController.DEFAULT_FILE_EXTENSION

    context['use_tmdb'] = not no_tmdb
    context['use_pattern'] = not no_pattern
    context['no_prompt'] = no_prompt
    context['no_signature'] = no_signature
    context['keep_mkvmerge_metadata'] = keep_mkvmerge_metadata

    context['resource_limits'] = {}
    context['resource_limits']['niceness'] = nice
    context['resource_limits']['cpu_percent'] = cpu

    context['import_subtitles'] = bool(subtitle_directory and subtitle_prefix)
    if context['import_subtitles']:
        context['subtitle_directory'] = subtitle_directory
        context['subtitle_prefix'] = subtitle_prefix

    existingSourcePaths = [p for p in paths if os.path.isfile(p) and p.split('.')[-1] in FfxController.INPUT_FILE_EXTENSIONS]

    # CLI Overrides

    cliOverrides = {}

    if language:
        cliOverrides['languages'] = {}
        for overLang in language:
            olTokens = overLang.split(':')
            if len(olTokens) == 2:
                try:
                    cliOverrides['languages'][int(olTokens[0])] = olTokens[1]
                except ValueError:
                    ctx.obj['logger'].warning(f"Ignoring non-integer language index {olTokens[0]}")
                    continue

    if title:
        cliOverrides['titles'] = {}
        for overTitle in title:
            otTokens = overTitle.split(':')
            if len(otTokens) == 2:
                try:
                    cliOverrides['titles'][int(otTokens[0])] = otTokens[1]
                except ValueError:
                    ctx.obj['logger'].warning(f"Ignoring non-integer title index {otTokens[0]}")
                    continue

    if default_video != -1:
        cliOverrides['default_video'] = default_video
    if forced_video != -1:
        cliOverrides['forced_video'] = forced_video
    if default_audio != -1:
        cliOverrides['default_audio'] = default_audio
    if forced_audio != -1:
        cliOverrides['forced_audio'] = forced_audio
    if default_subtitle != -1:
        cliOverrides['default_subtitle'] = default_subtitle
    if forced_subtitle != -1:
        cliOverrides['forced_subtitle'] = forced_subtitle

    if show != -1 or season != -1 or episode != -1:
        if len(existingSourcePaths) > 1:
            context['logger'].warning("Ignoring TMDB show, season and episode overrides, not supported for multiple source files")
        else:
            cliOverrides['tmdb'] = {}
            if show != -1:
                cliOverrides['tmdb']['show'] = show
            if season != -1:
                cliOverrides['tmdb']['season'] = season
            if episode != -1:
                cliOverrides['tmdb']['episode'] = episode

    if rearrange_streams:
        try:
            cliOverrides['stream_order'] = [int(si) for si in rearrange_streams.split(',')]
        except ValueError:
            ctx.obj['logger'].error("Non-integer in rearrange streams parameter")
            raise click.Abort()

    if cliOverrides:
        context['overrides'] = cliOverrides

    ctx.obj['logger'].debug(f"\nVideo encoder: {video_encoder}")

    qualityTokens = quality.split(',')
    q_list = [q for q in qualityTokens if q.isnumeric()]
    ctx.obj['logger'].debug(f"Qualities: {q_list}")

    presetTokens = preset.split(',')
    p_list = [p for p in presetTokens if p.isnumeric()]
    ctx.obj['logger'].debug(f"Presets: {p_list}")

    context['bitrates'] = {}
    context['bitrates']['stereo'] = str(stereo_bitrate) if str(stereo_bitrate).endswith('k') else f"{stereo_bitrate}k"
    context['bitrates']['ac3'] = str(ac3) if str(ac3).endswith('k') else f"{ac3}k"
    context['bitrates']['dts'] = str(dts) if str(dts).endswith('k') else f"{dts}k"

    ctx.obj['logger'].debug(f"Stereo bitrate: {context['bitrates']['stereo']}")
    ctx.obj['logger'].debug(f"AC3 bitrate: {context['bitrates']['ac3']}")
    ctx.obj['logger'].debug(f"DTS bitrate: {context['bitrates']['dts']}")

    # Process cut parameters
    context['perform_cut'] = (cut != 'none')
    if context['perform_cut']:
        cutTokens = cut.split(',')
        if len(cutTokens) == 2:
            context['cut_start'] = int(cutTokens[0])
            context['cut_length'] = int(cutTokens[1])
            ctx.obj['logger'].debug(f"Cut start={context['cut_start']} length={context['cut_length']}")

    tc = TmdbController() if context['use_tmdb'] else None

    qualityKwargs = {QualityFilter.QUALITY_KEY: str(QualityFilter.DEFAULT_H264_QUALITY if (context['video_encoder'] == VideoEncoder.H264 and not quality) else quality)}
    qf = QualityFilter(**qualityKwargs)

    if context['video_encoder'] == VideoEncoder.AV1 and preset:
        presetKwargs = {PresetFilter.PRESET_KEY: preset}
        PresetFilter(**presetKwargs)

    cf = None
    if crop == 'auto':
        cropKwargs = {}
        cf = CropFilter(**cropKwargs)

    denoiseKwargs = {}
    if denoise_strength:
        denoiseKwargs[NlmeansFilter.STRENGTH_KEY] = denoise_strength
    if denoise_patch_size:
        denoiseKwargs[NlmeansFilter.PATCH_SIZE_KEY] = denoise_patch_size
    if denoise_chroma_patch_size:
        denoiseKwargs[NlmeansFilter.CHROMA_PATCH_SIZE_KEY] = denoise_chroma_patch_size
    if denoise_research_window:
        denoiseKwargs[NlmeansFilter.RESEARCH_WINDOW_KEY] = denoise_research_window
    if denoise_chroma_research_window:
        denoiseKwargs[NlmeansFilter.CHROMA_RESEARCH_WINDOW_KEY] = denoise_chroma_research_window
    if denoise != 'none' or denoiseKwargs:
        NlmeansFilter(**denoiseKwargs)

    chainYield = list(qf.getChainYield())

    ctx.obj['logger'].info(f"\nRunning {len(existingSourcePaths) * len(chainYield)} jobs")

    jobIndex = 0

    for sourcePath in existingSourcePaths:

        # Separate base directory, basename and extension of the current source file
        sourceDirectory = os.path.dirname(sourcePath)
        sourceFilename = os.path.basename(sourcePath)
        sourcePathTokens = sourceFilename.split('.')

        sourceFileBasename = '.'.join(sourcePathTokens[:-1])
        sourceFilenameExtension = sourcePathTokens[-1]

        ctx.obj['logger'].info(f"\nProcessing file {sourcePath}")

        targetSuffices = {}

        mediaFileProperties = FileProperties(context, sourcePath)

        cropArguments = {} if cf is None else mediaFileProperties.findCropArguments()

        ssc = ShiftedSeasonController(context)

        showId = mediaFileProperties.getShowId()

        #HINT: -1 if not set
        if 'tmdb' in cliOverrides.keys() and 'season' in cliOverrides['tmdb']:
            showSeason = cliOverrides['tmdb']['season']
        else:
            showSeason = mediaFileProperties.getSeason()

        if 'tmdb' in cliOverrides.keys() and 'episode' in cliOverrides['tmdb']:
            showEpisode = cliOverrides['tmdb']['episode']
        else:
            showEpisode = mediaFileProperties.getEpisode()

        ctx.obj['logger'].debug(f"Season={showSeason} Episode={showEpisode}")

        sourceMediaDescriptor = mediaFileProperties.getMediaDescriptor()

        #HINT: This is None if the filename did not match anything in the database
        currentPattern = mediaFileProperties.getPattern() if context['use_pattern'] else None

        ctx.obj['logger'].debug(f"Pattern matching: {'No' if currentPattern is None else 'Yes'}")

        # Set up the FfxController depending on whether pattern matching is enabled and a pattern was matched
        if currentPattern is None:

            checkUniqueDispositions(context, sourceMediaDescriptor)
            currentShowDescriptor = None

            if context['import_subtitles']:
                sourceMediaDescriptor.importSubtitles(context['subtitle_directory'],
                                                      context['subtitle_prefix'],
                                                      showSeason,
                                                      showEpisode)

            if cliOverrides:
                sourceMediaDescriptor.applyOverrides(cliOverrides)

            fc = FfxController(context, sourceMediaDescriptor)

        else:
            targetMediaDescriptor = currentPattern.getMediaDescriptor(ctx.obj)
            checkUniqueDispositions(context, targetMediaDescriptor)
            currentShowDescriptor = currentPattern.getShowDescriptor(ctx.obj)

            # Check if source and target track descriptors match
            sourceTrackDescriptorList = sourceMediaDescriptor.getTrackDescriptors()
            targetTrackDescriptorList = targetMediaDescriptor.getTrackDescriptors()

            for ttd in targetTrackDescriptorList:

                tti = ttd.getIndex()
                ttsi = ttd.getSourceIndex()

                stList = [st for st in sourceTrackDescriptorList if st.getIndex() == ttsi]
                std = stList[0] if stList else None

                if std is None:
                    raise click.ClickException(f"Target track #{tti} referring to non-existent source track #{ttsi}")

                ttType = ttd.getType()
                stType = std.getType()

                if ttType != stType:
                    raise click.ClickException(f"Target track #{tti} type ({ttType.label()}) not matching source track #{ttsi} type ({stType.label()})")

            if context['import_subtitles']:
                targetMediaDescriptor.importSubtitles(context['subtitle_directory'],
                                                      context['subtitle_prefix'],
                                                      showSeason,
                                                      showEpisode)

            ctx.obj['logger'].debug(f"tmd subindices: {[t.getIndex() for t in targetMediaDescriptor.getTrackDescriptors()]} {[t.getSubIndex() for t in targetMediaDescriptor.getTrackDescriptors()]} {[t.getDispositionFlag(TrackDisposition.DEFAULT) for t in targetMediaDescriptor.getTrackDescriptors()]}")

            if cliOverrides:
                targetMediaDescriptor.applyOverrides(cliOverrides)

            ctx.obj['logger'].debug(f"tmd subindices: {[t.getIndex() for t in targetMediaDescriptor.getTrackDescriptors()]} {[t.getSubIndex() for t in targetMediaDescriptor.getTrackDescriptors()]} {[t.getDispositionFlag(TrackDisposition.DEFAULT) for t in targetMediaDescriptor.getTrackDescriptors()]}")

            ctx.obj['logger'].debug(f"Input mapping tokens (2nd pass): {targetMediaDescriptor.getInputMappingTokens()}")

            fc = FfxController(context, targetMediaDescriptor, sourceMediaDescriptor)

        indexSeasonDigits = currentShowDescriptor.getIndexSeasonDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDEX_SEASON_DIGITS
        indexEpisodeDigits = currentShowDescriptor.getIndexEpisodeDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDEX_EPISODE_DIGITS
        indicatorSeasonDigits = currentShowDescriptor.getIndicatorSeasonDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDICATOR_SEASON_DIGITS
        indicatorEpisodeDigits = currentShowDescriptor.getIndicatorEpisodeDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDICATOR_EPISODE_DIGITS

        # Shift season and episode if defined for this show
        if ('tmdb' not in cliOverrides.keys() and showId != -1
                and showSeason != -1 and showEpisode != -1):
            shiftedShowSeason, shiftedShowEpisode = ssc.shiftSeason(showId,
                                                                    season=showSeason,
                                                                    episode=showEpisode)
        else:
            shiftedShowSeason = showSeason
            shiftedShowEpisode = showEpisode

        # Assemble the target filename depending on whether TMDB lookup is enabled
        #HINT: -1 if not set
        showId = cliOverrides['tmdb']['show'] if 'tmdb' in cliOverrides.keys() and 'show' in cliOverrides['tmdb'] else (-1 if currentShowDescriptor is None else currentShowDescriptor.getId())

        if context['use_tmdb'] and showId != -1 and shiftedShowSeason != -1 and shiftedShowEpisode != -1:

            ctx.obj['logger'].debug(f"Querying TMDB for show_id={showId} season={shiftedShowSeason} episode={shiftedShowEpisode}")

            if currentPattern is None:
                sName, showYear = tc.getShowNameAndYear(showId)
                showName = filterFilename(sName)
                showFilenamePrefix = f"{showName} ({showYear})"
            else:
                showFilenamePrefix = currentShowDescriptor.getFilenamePrefix()

            tmdbEpisodeResult = tc.queryEpisode(showId, shiftedShowSeason, shiftedShowEpisode)

            ctx.obj['logger'].debug(f"tmdbEpisodeResult={tmdbEpisodeResult}")

            if tmdbEpisodeResult:
                substitutedEpisodeName = filterFilename(substituteTmdbFilename(tmdbEpisodeResult['name']))
                sourceFileBasename = getEpisodeFileBasename(showFilenamePrefix,
                                                            substitutedEpisodeName,
                                                            shiftedShowSeason,
                                                            shiftedShowEpisode,
                                                            indexSeasonDigits,
                                                            indexEpisodeDigits,
                                                            indicatorSeasonDigits,
                                                            indicatorEpisodeDigits,
                                                            context=ctx.obj)

        if label:
            if shiftedShowSeason > -1 and shiftedShowEpisode > -1:
                targetSuffices['se'] = f"S{shiftedShowSeason:0{indicatorSeasonDigits}d}E{shiftedShowEpisode:0{indicatorEpisodeDigits}d}"
            elif shiftedShowEpisode > -1:
                targetSuffices['se'] = f"E{shiftedShowEpisode:0{indicatorEpisodeDigits}d}"
            else:
                if 'se' in targetSuffices.keys():
                    del targetSuffices['se']

        ctx.obj['logger'].debug(f"fileBasename={sourceFileBasename}")

        for chainIteration in chainYield:

            ctx.obj['logger'].debug(f"\nchain iteration: {chainIteration}\n")

            chainVariant = '-'.join([fy['variant'] for fy in chainIteration])

            ctx.obj['logger'].debug(f"\nRunning job {jobIndex} file={sourcePath} variant={chainVariant}")
            jobIndex += 1

            ctx.obj['logger'].debug(f"label={label if label else 'Falsy'}")
            ctx.obj['logger'].debug(f"sourceFileBasename={sourceFileBasename}")

            targetFileBasename = sourceFileBasename if context['use_tmdb'] and not label else label

            targetFilenameTokens = [targetFileBasename]

            if 'se' in targetSuffices.keys():
                targetFilenameTokens += [targetSuffices['se']]

            for filterYield in chainIteration:
                targetFilenameTokens += filterYield['suffices']

            targetFilename = f"{'_'.join(targetFilenameTokens)}.{sourceFilenameExtension if rename_only else targetExtension}"

            if sourceFilename == targetFilename:
                targetFilename = f"out_{targetFilename}"

            targetPath = os.path.join(output_directory, targetFilename) if output_directory else targetFilename

            ctx.obj['logger'].info(f"Creating file {targetFilename}")

            if rename_only:
                shutil.copyfile(sourcePath, targetPath)
            else:
                fc.runJob(sourcePath,
                          targetPath,
                          targetFormat,
                          context['video_encoder'],
                          chainIteration,
                          cropArguments)

    endTime = time.perf_counter()
    ctx.obj['logger'].info(f"\nDONE\nTime elapsed {endTime - startTime}")


if __name__ == '__main__':
    ffx()
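As a standalone illustration of the target-filename assembly performed inside the conversion loop above (the helper name and argument shapes here are hypothetical, not part of the module):

```python
# Hypothetical helper mirroring the targetFilenameTokens logic in convert():
# join the base name and suffixes with '_', append the extension, and prefix
# 'out_' when the result would collide with the source filename.
def assemble_target_filename(base, suffices, source_filename, extension):
    tokens = [base] + list(suffices)
    target = f"{'_'.join(tokens)}.{extension}"
    if target == source_filename:
        target = f"out_{target}"
    return target
```

For example, `assemble_target_filename('Show (2001)', ['S01E02'], 'Show (2001).mkv', 'webm')` yields `'Show (2001)_S01E02.webm'`.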
@@ -1,38 +0,0 @@
from textual.app import App

from .shows_screen import ShowsScreen
from .media_details_screen import MediaDetailsScreen


class FfxApp(App):

    TITLE = "FFX"

    BINDINGS = [
        ("q", "quit()", "Quit"),
        ("h", "switch_mode('help')", "Help"),
    ]

    def __init__(self, context={}):
        super().__init__()

        # Data 'input' variable
        self.context = context

    def on_mount(self) -> None:

        if 'command' in self.context.keys():

            if self.context['command'] == 'shows':
                self.push_screen(ShowsScreen())

            if self.context['command'] == 'inspect':
                self.push_screen(MediaDetailsScreen())

    def getContext(self):
        """Data 'output' method"""
        return self.context

@@ -1,360 +0,0 @@
import os, click

from ffx.media_descriptor_change_set import MediaDescriptorChangeSet

from ffx.media_descriptor import MediaDescriptor
from ffx.audio_layout import AudioLayout
from ffx.track_type import TrackType
from ffx.track_codec import TrackCodec
from ffx.video_encoder import VideoEncoder
from ffx.process import executeProcess

from ffx.constants import DEFAULT_cut_start, DEFAULT_cut_length

from ffx.filter.quality_filter import QualityFilter
from ffx.filter.preset_filter import PresetFilter
from ffx.filter.crop_filter import CropFilter


class FfxController:

    COMMAND_TOKENS = ['ffmpeg', '-y']
    NULL_TOKENS = ['-f', 'null', '/dev/null']

    TEMP_FILE_NAME = "ffmpeg2pass-0.log"

    DEFAULT_VIDEO_ENCODER = VideoEncoder.VP9.label()

    DEFAULT_FILE_FORMAT = 'webm'
    DEFAULT_FILE_EXTENSION = 'webm'

    INPUT_FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']

    CHANNEL_MAP_5_1 = 'FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1'

    # SIGNATURE_TAGS = {'RECODED_WITH': 'FFX'}

    def __init__(self,
                 context: dict,
                 targetMediaDescriptor: MediaDescriptor,
                 sourceMediaDescriptor: MediaDescriptor = None):

        self.__context = context

        self.__targetMediaDescriptor = targetMediaDescriptor
        self.__mdcs = MediaDescriptorChangeSet(context,
                                               targetMediaDescriptor,
                                               sourceMediaDescriptor)

        self.__logger = context['logger']

    def generateAV1Tokens(self, quality, preset, subIndex: int = 0):
        return [f"-c:v:{int(subIndex)}", 'libsvtav1',
                '-svtav1-params', f"crf={quality}:preset={preset}:tune=0:enable-overlays=1:scd=1:scm=0",
                '-pix_fmt', 'yuv420p10le']

    # -c:v libx264 -preset slow -crf 17
    def generateH264Tokens(self, quality, subIndex: int = 0):
        return [f"-c:v:{int(subIndex)}", 'libx264',
                '-preset', 'slow',
                '-crf', str(quality)]

    # -c:v:0 libvpx-vp9 -row-mt 1 -crf 32 -pass 1 -speed 4 -frame-parallel 0 -g 9999 -aq-mode 0
    def generateVP9Pass1Tokens(self, quality, subIndex: int = 0):
        return [f"-c:v:{int(subIndex)}",
                'libvpx-vp9',
                '-row-mt', '1',
                '-crf', str(quality),
                '-pass', '1',
                '-speed', '4',
                '-frame-parallel', '0',
                '-g', '9999',
                '-aq-mode', '0']

    # -c:v:0 libvpx-vp9 -row-mt 1 -crf 32 -pass 2 -frame-parallel 0 -g 9999 -aq-mode 0 -auto-alt-ref 1 -lag-in-frames 25
    def generateVP9Pass2Tokens(self, quality, subIndex: int = 0):
        return [f"-c:v:{int(subIndex)}",
                'libvpx-vp9',
                '-row-mt', '1',
                '-crf', str(quality),
                '-pass', '2',
                '-frame-parallel', '0',
                '-g', '9999',
                '-aq-mode', '0',
                '-auto-alt-ref', '1',
                '-lag-in-frames', '25']

    def generateVideoCopyTokens(self, subIndex):
        return [f"-c:v:{int(subIndex)}",
                'copy']

    def generateCropTokens(self):
        """Generates the ffmpeg cut options (-ss/-t) from the configured cut start and length."""

        if 'cut_start' in self.__context.keys() and 'cut_length' in self.__context.keys():
            cropStart = int(self.__context['cut_start'])
            cropLength = int(self.__context['cut_length'])
        else:
            cropStart = DEFAULT_cut_start
            cropLength = DEFAULT_cut_length

        return ['-ss', str(cropStart), '-t', str(cropLength)]

    def generateOutputTokens(self, filePathBase, format='', ext=''):

        self.__logger.debug(f"FfxController.generateOutputTokens(): base='{filePathBase}' format='{format}' ext='{ext}'")

        outputFilePath = f"{filePathBase}{('.' + str(ext)) if ext else ''}"
        if format:
            return ['-f', format, outputFilePath]
        else:
            return [outputFilePath]

    def generateAudioEncodingTokens(self):
        """Generates ffmpeg options for audio streams including channel remapping, codec and bitrate"""

        audioTokens = []

        targetAudioTrackDescriptors = self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.AUDIO)

        trackSubIndex = 0
        for trackDescriptor in targetAudioTrackDescriptors:

            trackAudioLayout = trackDescriptor.getAudioLayout()

            if trackAudioLayout == AudioLayout.LAYOUT_6_1:
                audioTokens += [f"-c:a:{trackSubIndex}",
                                'libopus',
                                f"-filter:a:{trackSubIndex}",
                                'channelmap=channel_layout=6.1',
                                f"-b:a:{trackSubIndex}",
                                self.__context['bitrates']['dts']]

            if trackAudioLayout == AudioLayout.LAYOUT_5_1:
                audioTokens += [f"-c:a:{trackSubIndex}",
                                'libopus',
                                f"-filter:a:{trackSubIndex}",
                                f"channelmap={FfxController.CHANNEL_MAP_5_1}",
                                f"-b:a:{trackSubIndex}",
                                self.__context['bitrates']['ac3']]

            if trackAudioLayout == AudioLayout.LAYOUT_STEREO:
                audioTokens += [f"-c:a:{trackSubIndex}",
                                'libopus',
                                f"-b:a:{trackSubIndex}",
                                self.__context['bitrates']['stereo']]

            if trackAudioLayout == AudioLayout.LAYOUT_6CH:
                audioTokens += [f"-c:a:{trackSubIndex}",
                                'libopus',
                                f"-filter:a:{trackSubIndex}",
                                f"channelmap={FfxController.CHANNEL_MAP_5_1}",
                                f"-b:a:{trackSubIndex}",
                                self.__context['bitrates']['ac3']]

            # -ac 5 ?
            if trackAudioLayout == AudioLayout.LAYOUT_5_0:
                audioTokens += [f"-c:a:{trackSubIndex}",
                                'libopus',
                                f"-filter:a:{trackSubIndex}",
                                'channelmap=channel_layout=5.0',
                                f"-b:a:{trackSubIndex}",
                                self.__context['bitrates']['ac3']]

            trackSubIndex += 1
        return audioTokens

    def runJob(self,
               sourcePath,
               targetPath,
               targetFormat: str = '',
               videoEncoder: VideoEncoder = VideoEncoder.VP9,
               chainIteration: list = [],
               cropArguments: dict = {}):

        qualityFilters = [fy for fy in chainIteration if fy['identifier'] == 'quality']
        presetFilters = [fy for fy in chainIteration if fy['identifier'] == 'preset']

        cropFilters = [fy for fy in chainIteration if fy['identifier'] == 'crop']
        denoiseFilters = [fy for fy in chainIteration if fy['identifier'] == 'nlmeans']

        quality = qualityFilters[0]['parameters']['quality'] if qualityFilters else QualityFilter.DEFAULT_VP9_QUALITY
        preset = presetFilters[0]['parameters']['preset'] if presetFilters else PresetFilter.DEFAULT_PRESET

        filterParamTokens = []

        if cropArguments:
            cropParams = (f"crop="
                          + f"{cropArguments[CropFilter.OUTPUT_WIDTH_KEY]}"
                          + f":{cropArguments[CropFilter.OUTPUT_HEIGHT_KEY]}"
                          + f":{cropArguments[CropFilter.OFFSET_X_KEY]}"
                          + f":{cropArguments[CropFilter.OFFSET_Y_KEY]}")
            filterParamTokens.append(cropParams)

        filterParamTokens.extend(denoiseFilters[0]['tokens'] if denoiseFilters else [])

        filterTokens = ['-vf', ', '.join(filterParamTokens)] if filterParamTokens else []

        commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]

        if videoEncoder == VideoEncoder.AV1:

            commandSequence = (commandTokens
                               + self.__targetMediaDescriptor.getImportFileTokens()
                               + self.__targetMediaDescriptor.getInputMappingTokens()
                               + self.__mdcs.generateDispositionTokens())

            # Optional tokens
            commandSequence += self.__mdcs.generateMetadataTokens()
            commandSequence += filterTokens

            for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
                #HINT: Attached thumbnails are not supported by the .webm container format
                if td.getCodec() != TrackCodec.PNG:
                    commandSequence += self.generateAV1Tokens(int(quality), int(preset))

            commandSequence += self.generateAudioEncodingTokens()

            if self.__context['perform_cut']:
                commandSequence += self.generateCropTokens()

            commandSequence += self.generateOutputTokens(targetPath,
                                                         targetFormat)

            self.__logger.debug("FfxController.runJob(): Running command sequence")

            if not self.__context['dry_run']:
                executeProcess(commandSequence, context=self.__context)

        if videoEncoder == VideoEncoder.H264:

            commandSequence = (commandTokens
                               + self.__targetMediaDescriptor.getImportFileTokens()
                               + self.__targetMediaDescriptor.getInputMappingTokens()
                               + self.__mdcs.generateDispositionTokens())

            # Optional tokens
            commandSequence += self.__mdcs.generateMetadataTokens()
            commandSequence += filterTokens

            for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
                #HINT: Attached thumbnails are not supported by the .webm container format
                if td.getCodec() != TrackCodec.PNG:
                    commandSequence += self.generateH264Tokens(int(quality))

            commandSequence += self.generateAudioEncodingTokens()
|
|
||||||
|
|
||||||
if self.__context['perform_cut']:
|
|
||||||
commandSequence += self.generateCropTokens()
|
|
||||||
|
|
||||||
commandSequence += self.generateOutputTokens(targetPath,
|
|
||||||
targetFormat)
|
|
||||||
|
|
||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
|
||||||
executeProcess(commandSequence, context = self.__context)
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.VP9:
|
|
||||||
|
|
||||||
commandSequence1 = (commandTokens
|
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens(only_video=True))
|
|
||||||
|
|
||||||
# Optional tokens
|
|
||||||
#NOTE: Filters and so needs to run on the first pass as well, as here
|
|
||||||
# the required bitrate for the second run is determined and recorded
|
|
||||||
# TODO: Results seems to be slightly better with first pass omitted,
|
|
||||||
# Confirm or find better filter settings for 2-pass
|
|
||||||
# commandSequence1 += self.__context['denoiser'].generatefilterTokens()
|
|
||||||
|
|
||||||
for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
|
|
||||||
#HINT: Attached thumbnails are not supported by .webm container format
|
|
||||||
if td.getCodec != TrackCodec.PNG:
|
|
||||||
commandSequence1 += self.generateVP9Pass1Tokens(int(quality))
|
|
||||||
|
|
||||||
if self.__context['perform_cut']:
|
|
||||||
commandSequence1 += self.generateCropTokens()
|
|
||||||
|
|
||||||
commandSequence1 += FfxController.NULL_TOKENS
|
|
||||||
|
|
||||||
if os.path.exists(FfxController.TEMP_FILE_NAME):
|
|
||||||
os.remove(FfxController.TEMP_FILE_NAME)
|
|
||||||
|
|
||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence 1")
|
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
|
||||||
executeProcess(commandSequence1, context = self.__context)
|
|
||||||
|
|
||||||
commandSequence2 = (commandTokens
|
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens()
|
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
|
||||||
|
|
||||||
# Optional tokens
|
|
||||||
commandSequence2 += self.__mdcs.generateMetadataTokens()
|
|
||||||
commandSequence2 += filterTokens
|
|
||||||
|
|
||||||
for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
|
|
||||||
#HINT: Attached thumbnails are not supported by .webm container format
|
|
||||||
if td.getCodec != TrackCodec.PNG:
|
|
||||||
commandSequence2 += self.generateVP9Pass2Tokens(int(quality))
|
|
||||||
|
|
||||||
commandSequence2 += self.generateAudioEncodingTokens()
|
|
||||||
|
|
||||||
if self.__context['perform_cut']:
|
|
||||||
commandSequence2 += self.generateCropTokens()
|
|
||||||
|
|
||||||
commandSequence2 += self.generateOutputTokens(targetPath,
|
|
||||||
targetFormat)
|
|
||||||
|
|
||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence 2")
|
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
|
||||||
out, err, rc = executeProcess(commandSequence2, context = self.__context)
|
|
||||||
if rc:
|
|
||||||
raise click.ClickException(f"Command resulted in error: rc={rc} error={err}")
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def createEmptyFile(self,
|
|
||||||
path: str = 'empty.mkv',
|
|
||||||
sizeX: int = 1280,
|
|
||||||
sizeY: int = 720,
|
|
||||||
rate: int = 25,
|
|
||||||
length: int = 10):
|
|
||||||
|
|
||||||
commandTokens = FfxController.COMMAND_TOKENS
|
|
||||||
|
|
||||||
commandTokens += ['-f',
|
|
||||||
'lavfi',
|
|
||||||
'-i',
|
|
||||||
f"color=size={sizeX}x{sizeY}:rate={rate}:color=black",
|
|
||||||
'-f',
|
|
||||||
'lavfi',
|
|
||||||
'-i',
|
|
||||||
'anullsrc=channel_layout=stereo:sample_rate=44100',
|
|
||||||
'-t',
|
|
||||||
str(length),
|
|
||||||
path]
|
|
||||||
|
|
||||||
out, err, rc = executeProcess(commandTokens, context = self.__context)
|
|
||||||
@@ -1,248 +0,0 @@
import os, re, json

from .media_descriptor import MediaDescriptor
from .pattern_controller import PatternController

from ffx.filter.crop_filter import CropFilter

from .process import executeProcess

from ffx.model.pattern import Pattern


class FileProperties():

    FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']

    SE_INDICATOR_PATTERN = '([sS][0-9]+[eE][0-9]+)'
    SEASON_EPISODE_INDICATOR_MATCH = '[sS]([0-9]+)[eE]([0-9]+)'
    EPISODE_INDICATOR_MATCH = '[eE]([0-9]+)'

    CROPDETECT_PATTERN = 'crop=[0-9]+:[0-9]+:[0-9]+:[0-9]+$'

    DEFAULT_INDEX_DIGITS = 3

    def __init__(self, context, sourcePath):

        self.context = context

        self.__logger = context['logger']

        # Separate base directory, basename and extension for the current source file
        self.__sourcePath = sourcePath

        self.__sourceDirectory = os.path.dirname(self.__sourcePath)
        self.__sourceFilename = os.path.basename(self.__sourcePath)

        sourcePathTokens = self.__sourceFilename.split('.')

        if sourcePathTokens[-1] in FileProperties.FILE_EXTENSIONS:
            self.__sourceFileBasename = '.'.join(sourcePathTokens[:-1])
            self.__sourceFilenameExtension = sourcePathTokens[-1]
        else:
            self.__sourceFileBasename = self.__sourceFilename
            self.__sourceFilenameExtension = ''

        self.__pc = PatternController(context)

        # Check whether the database contains a matching pattern
        matchResult = self.__pc.matchFilename(self.__sourceFilename)

        self.__logger.debug(f"FileProperties.__init__(): Match result: {matchResult}")

        self.__pattern: Pattern = matchResult['pattern'] if matchResult else None

        if matchResult:
            databaseMatchedGroups = matchResult['match'].groups()
            self.__logger.debug(f"FileProperties.__init__(): Matched groups: {databaseMatchedGroups}")

            seIndicator = databaseMatchedGroups[0]

            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, seIndicator)
            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, seIndicator)

        else:
            self.__logger.debug(f"FileProperties.__init__(): Checking file name for indicator {self.__sourceFilename}")

            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, self.__sourceFilename)
            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, self.__sourceFilename)

        if se_match is not None:
            self.__season = int(se_match.group(1))
            self.__episode = int(se_match.group(2))
        elif e_match is not None:
            self.__season = -1
            self.__episode = int(e_match.group(1))
        else:
            self.__season = -1
            self.__episode = -1


    def getFormatData(self):
        """Returns ffprobe format data as a dict, e.g.:
        "format": {
            "filename": "Downloads/nagatoro_s02/nagatoro_s01e02.mkv",
            "nb_streams": 18,
            "nb_programs": 0,
            "nb_stream_groups": 0,
            "format_name": "matroska,webm",
            "format_long_name": "Matroska / WebM",
            "start_time": "0.000000",
            "duration": "1420.063000",
            "size": "1489169824",
            "bit_rate": "8389316",
            "probe_score": 100,
            "tags": {
                "PUBLISHER": "Crunchyroll",
                "ENCODER": "Lavf58.29.100"
            }
        }
        """

        # ffprobe -hide_banner -show_format -of json
        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
                                                                  "-hide_banner",
                                                                  "-show_format",
                                                                  "-of", "json",
                                                                  self.__sourcePath])  # context = self.context

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        return json.loads(ffprobeOutput)['format']


    def getStreamData(self):
        """Returns ffprobe stream data as an array with elements according to the following example:
        {
            "index": 4,
            "codec_name": "hdmv_pgs_subtitle",
            "codec_long_name": "HDMV Presentation Graphic Stream subtitles",
            "codec_type": "subtitle",
            "codec_tag_string": "[0][0][0][0]",
            "codec_tag": "0x0000",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/1000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1421035,
            "duration": "1421.035000",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0,
                "non_diegetic": 0,
                "captions": 0,
                "descriptions": 0,
                "metadata": 0,
                "dependent": 0,
                "still_image": 0
            },
            "tags": {
                "language": "ger",
                "title": "German Full"
            }
        }
        """

        # ffprobe -hide_banner -show_streams -of json
        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
                                                                  "-hide_banner",
                                                                  "-show_streams",
                                                                  "-of", "json",
                                                                  self.__sourcePath])  # context = self.context

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        return json.loads(ffprobeOutput)['streams']


    def findCropArguments(self):
        """Runs ffmpeg's cropdetect filter over a sample of the source and
        returns the most frequently detected crop parameters."""

        # ffmpeg -i <input.file> -vf cropdetect -f null -
        ffmpegOutput, ffmpegError, returnCode = executeProcess(["ffmpeg", "-i",
                                                                self.__sourcePath,
                                                                "-vf", "cropdetect",
                                                                "-ss", "60",
                                                                "-t", "180",
                                                                "-f", "null", "-"
                                                                ])

        errorLines = ffmpegError.split('\n')

        crops = {}
        for el in errorLines:

            cropdetect_match = re.search(FileProperties.CROPDETECT_PATTERN, el)

            if cropdetect_match is not None:
                cropParam = str(cropdetect_match.group(0))

                crops[cropParam] = crops.get(cropParam, 0) + 1

        if crops:
            # Sort by occurrence count so that the most frequently detected crop wins
            cropHistogram = sorted(crops, key=crops.get, reverse=True)
            cropString = cropHistogram[0]

            cropTokens = cropString.split('=')
            cropValueTokens = cropTokens[1]
            cropValues = cropValueTokens.split(':')

            return {
                CropFilter.OUTPUT_WIDTH_KEY: cropValues[0],
                CropFilter.OUTPUT_HEIGHT_KEY: cropValues[1],
                CropFilter.OFFSET_X_KEY: cropValues[2],
                CropFilter.OFFSET_Y_KEY: cropValues[3]
            }
        else:
            return {}


    def getMediaDescriptor(self):
        return MediaDescriptor.fromFfprobe(self.context, self.getFormatData(), self.getStreamData())


    def getShowId(self) -> int:
        """Result is -1 if the filename did not match anything in the database"""
        return self.__pattern.getShowId() if self.__pattern is not None else -1

    def getPattern(self) -> Pattern:
        """Result is None if the filename did not match anything in the database"""
        return self.__pattern


    def getSeason(self) -> int:
        return int(self.__season)

    def getEpisode(self) -> int:
        return int(self.__episode)


    def getFilename(self):
        return self.__sourceFilename

    def getFileBasename(self):
        return self.__sourceFileBasename
@@ -1,51 +0,0 @@
from .filter import Filter


class CropFilter(Filter):

    IDENTIFIER = 'crop'

    OUTPUT_WIDTH_KEY = 'output_width'
    OUTPUT_HEIGHT_KEY = 'output_height'
    OFFSET_X_KEY = 'x_offset'
    OFFSET_Y_KEY = 'y_offset'

    def __init__(self, **kwargs):

        self.__outputWidth = int(kwargs.get(CropFilter.OUTPUT_WIDTH_KEY, 0))
        self.__outputHeight = int(kwargs.get(CropFilter.OUTPUT_HEIGHT_KEY, 0))
        self.__offsetX = int(kwargs.get(CropFilter.OFFSET_X_KEY, 0))
        self.__offsetY = int(kwargs.get(CropFilter.OFFSET_Y_KEY, 0))

        super().__init__(self)

    def setArguments(self, **kwargs):
        self.__outputWidth = int(kwargs.get(CropFilter.OUTPUT_WIDTH_KEY))
        self.__outputHeight = int(kwargs.get(CropFilter.OUTPUT_HEIGHT_KEY))
        self.__offsetX = int(kwargs.get(CropFilter.OFFSET_X_KEY))
        self.__offsetY = int(kwargs.get(CropFilter.OFFSET_Y_KEY))

    def getPayload(self):

        payload = {'identifier': CropFilter.IDENTIFIER,
                   'parameters': {
                       CropFilter.OUTPUT_WIDTH_KEY: self.__outputWidth,
                       CropFilter.OUTPUT_HEIGHT_KEY: self.__outputHeight,
                       CropFilter.OFFSET_X_KEY: self.__offsetX,
                       CropFilter.OFFSET_Y_KEY: self.__offsetY
                   },
                   'suffices': [],
                   'variant': f"C{self.__outputWidth}-{self.__outputHeight}-{self.__offsetX}-{self.__offsetY}",
                   'tokens': ['crop='
                              + f"{self.__outputWidth}"
                              + f":{self.__outputHeight}"
                              + f":{self.__offsetX}"
                              + f":{self.__offsetY}"]}

        return payload


    def getYield(self):
        yield self.getPayload()
@@ -1,140 +0,0 @@
from .filter import Filter


class DeinterlaceFilter(Filter):

    IDENTIFIER = 'bwdif'

    def __init__(self, **kwargs):
        # bwdif is used with fixed settings; no tunable parameters are read from kwargs.
        super().__init__(self)


    def getPayload(self):

        payload = {'identifier': DeinterlaceFilter.IDENTIFIER,
                   'parameters': {},
                   'suffices': [],
                   'variant': 'DEINT',
                   'tokens': ['bwdif=mode=1']}

        return payload


    def getYield(self):
        yield self.getPayload()
@@ -1,17 +0,0 @@
import itertools


class Filter():

    # Class-level attribute: the filter chain is shared by all Filter instances.
    filterChain: list = []

    def __init__(self, filter):

        self.filterChain.append(filter)

    def getFilterChain(self):
        return self.filterChain

    def getChainYield(self):
        # Cartesian product of every registered filter's payload variants
        for fy in itertools.product(*[f.getYield() for f in self.filterChain]):
            yield fy
@@ -1,162 +0,0 @@
import itertools

from .filter import Filter


class NlmeansFilter(Filter):

    IDENTIFIER = 'nlmeans'

    DEFAULT_STRENGTH: float = 2.8
    DEFAULT_PATCH_SIZE: int = 13
    DEFAULT_CHROMA_PATCH_SIZE: int = 9
    DEFAULT_RESEARCH_WINDOW: int = 23
    DEFAULT_CHROMA_RESEARCH_WINDOW: int = 17

    STRENGTH_KEY = 'strength'
    PATCH_SIZE_KEY = 'patch_size'
    CHROMA_PATCH_SIZE_KEY = 'chroma_patch_size'
    RESEARCH_WINDOW_KEY = 'research_window'
    CHROMA_RESEARCH_WINDOW_KEY = 'chroma_research_window'


    def __init__(self, **kwargs):

        self.__useHardware = kwargs.get('use_hardware', False)

        self.__strengthList = []
        strength = kwargs.get(NlmeansFilter.STRENGTH_KEY, '')
        if strength:
            strengthTokens = strength.split(',')
            for st in strengthTokens:
                try:
                    strengthValue = float(st)
                except ValueError:
                    raise ValueError('NlmeansFilter: Strength value has to be of type float')
                if strengthValue < 1.0 or strengthValue > 30.0:
                    raise ValueError('NlmeansFilter: Strength value has to be between 1.0 and 30.0')
                self.__strengthList.append(strengthValue)
        else:
            self.__strengthList = [NlmeansFilter.DEFAULT_STRENGTH]

        self.__patchSizeList = []
        patchSize = kwargs.get(NlmeansFilter.PATCH_SIZE_KEY, '')
        if patchSize:
            patchSizeTokens = patchSize.split(',')
            for pst in patchSizeTokens:
                try:
                    patchSizeValue = int(pst)
                except ValueError:
                    raise ValueError('NlmeansFilter: Patch size value has to be of type int')
                if patchSizeValue < 0 or patchSizeValue > 99:
                    raise ValueError('NlmeansFilter: Patch size value has to be between 0 and 99')
                if patchSizeValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Patch size value has to be an odd number')
                self.__patchSizeList.append(patchSizeValue)
        else:
            self.__patchSizeList = [NlmeansFilter.DEFAULT_PATCH_SIZE]

        self.__chromaPatchSizeList = []
        chromaPatchSize = kwargs.get(NlmeansFilter.CHROMA_PATCH_SIZE_KEY, '')
        if chromaPatchSize:
            chromaPatchSizeTokens = chromaPatchSize.split(',')
            for cpst in chromaPatchSizeTokens:
                try:
                    chromaPatchSizeValue = int(cpst)
                except ValueError:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be of type int')
                if chromaPatchSizeValue < 0 or chromaPatchSizeValue > 99:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be between 0 and 99')
                if chromaPatchSizeValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be an odd number')
                self.__chromaPatchSizeList.append(chromaPatchSizeValue)
        else:
            self.__chromaPatchSizeList = [NlmeansFilter.DEFAULT_CHROMA_PATCH_SIZE]

        self.__researchWindowList = []
        researchWindow = kwargs.get(NlmeansFilter.RESEARCH_WINDOW_KEY, '')
        if researchWindow:
            researchWindowTokens = researchWindow.split(',')
            for rwt in researchWindowTokens:
                try:
                    researchWindowValue = int(rwt)
                except ValueError:
                    raise ValueError('NlmeansFilter: Research window value has to be of type int')
                if researchWindowValue < 0 or researchWindowValue > 99:
                    raise ValueError('NlmeansFilter: Research window value has to be between 0 and 99')
                if researchWindowValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Research window value has to be an odd number')
                self.__researchWindowList.append(researchWindowValue)
        else:
            self.__researchWindowList = [NlmeansFilter.DEFAULT_RESEARCH_WINDOW]

        self.__chromaResearchWindowList = []
        chromaResearchWindow = kwargs.get(NlmeansFilter.CHROMA_RESEARCH_WINDOW_KEY, '')
        if chromaResearchWindow:
            chromaResearchWindowTokens = chromaResearchWindow.split(',')
            for crwt in chromaResearchWindowTokens:
                try:
                    chromaResearchWindowValue = int(crwt)
                except ValueError:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be of type int')
                if chromaResearchWindowValue < 0 or chromaResearchWindowValue > 99:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be between 0 and 99')
                if chromaResearchWindowValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be an odd number')
                self.__chromaResearchWindowList.append(chromaResearchWindowValue)
        else:
            self.__chromaResearchWindowList = [NlmeansFilter.DEFAULT_CHROMA_RESEARCH_WINDOW]

        super().__init__(self)


    def getPayload(self, iteration):

        strength = iteration[0]
        patchSize = iteration[1]
        chromaPatchSize = iteration[2]
        researchWindow = iteration[3]
        chromaResearchWindow = iteration[4]

        suffices = []

        if len(self.__strengthList) > 1:
            suffices += [f"ds{strength}"]
        if len(self.__patchSizeList) > 1:
            suffices += [f"dp{patchSize}"]
        if len(self.__chromaPatchSizeList) > 1:
            suffices += [f"dpc{chromaPatchSize}"]
        if len(self.__researchWindowList) > 1:
            suffices += [f"dr{researchWindow}"]
        if len(self.__chromaResearchWindowList) > 1:
            suffices += [f"drc{chromaResearchWindow}"]

        filterName = 'nlmeans_opencl' if self.__useHardware else 'nlmeans'

        payload = {'identifier': NlmeansFilter.IDENTIFIER,
                   'parameters': {
                       'strength': strength,
                       'patch_size': patchSize,
                       'chroma_patch_size': chromaPatchSize,
                       'research_window': researchWindow,
                       'chroma_research_window': chromaResearchWindow
                   },
                   'suffices': suffices,
                   'variant': f"DS{strength}-DP{patchSize}-DPC{chromaPatchSize}"
                              + f"-DR{researchWindow}-DRC{chromaResearchWindow}",
                   'tokens': [f"{filterName}=s={strength}"
                              + f":p={patchSize}"
                              + f":pc={chromaPatchSize}"
                              + f":r={researchWindow}"
                              + f":rc={chromaResearchWindow}"]}

        return payload


    def getYield(self):
        for it in itertools.product(self.__strengthList,
                                    self.__patchSizeList,
                                    self.__chromaPatchSizeList,
                                    self.__researchWindowList,
                                    self.__chromaResearchWindowList):
            yield self.getPayload(it)
@@ -1,54 +0,0 @@
from .filter import Filter


class PresetFilter(Filter):

    IDENTIFIER = 'preset'

    DEFAULT_PRESET = 5

    PRESET_KEY = 'preset'

    def __init__(self, **kwargs):

        self.__presetsList = []
        presets = str(kwargs.get(PresetFilter.PRESET_KEY, ''))
        if presets:
            presetTokens = presets.split(',')
            for p in presetTokens:
                try:
                    presetValue = int(p)
                except ValueError:
                    raise ValueError('PresetFilter: Preset value has to be of type int')
                if presetValue < 0 or presetValue > 13:
                    raise ValueError('PresetFilter: Preset value has to be between 0 and 13')
                self.__presetsList.append(presetValue)
        else:
            self.__presetsList = [PresetFilter.DEFAULT_PRESET]

        super().__init__(self)


    def getPayload(self, preset):

        suffices = []

        if len(self.__presetsList) > 1:
            suffices += [f"p{preset}"]

        payload = {'identifier': PresetFilter.IDENTIFIER,
                   'parameters': {
                       'preset': preset
                   },
                   'suffices': suffices,
                   'variant': f"P{preset}",
                   'tokens': []}

        return payload


    def getYield(self):
        for p in self.__presetsList:
            yield self.getPayload(p)
@@ -1,55 +0,0 @@
import itertools

from .filter import Filter


class QualityFilter(Filter):

    IDENTIFIER = 'quality'

    DEFAULT_VP9_QUALITY = 32
    DEFAULT_H264_QUALITY = 17

    QUALITY_KEY = 'quality'

    def __init__(self, **kwargs):

        self.__qualitiesList = []
        qualities = kwargs.get(QualityFilter.QUALITY_KEY, '')
        if qualities:
            qualityTokens = qualities.split(',')
            for q in qualityTokens:
                try:
                    qualityValue = int(q)
                except ValueError:
                    raise ValueError('QualityFilter: Quality value has to be of type int')
                if qualityValue < 0 or qualityValue > 63:
                    raise ValueError('QualityFilter: Quality value has to be between 0 and 63')
                self.__qualitiesList.append(qualityValue)
        else:
            self.__qualitiesList = [QualityFilter.DEFAULT_VP9_QUALITY]

        super().__init__(self)

    def getPayload(self, quality):

        suffices = []

        if len(self.__qualitiesList) > 1:
            suffices += [f"q{quality}"]

        payload = {'identifier': QualityFilter.IDENTIFIER,
                   'parameters': {
                       'quality': quality
                   },
                   'suffices': suffices,
                   'variant': f"Q{quality}",
                   'tokens': []}

        return payload

    def getYield(self):
        for q in self.__qualitiesList:
            yield self.getPayload(q)
@@ -1,6 +0,0 @@
from .filter import Filter


class ScaleFilter(Filter):

    def __init__(self):
        super().__init__(self)
@@ -1,13 +0,0 @@
from textual.app import ComposeResult
from textual.screen import Screen
from textual.widgets import Footer, Placeholder


class HelpScreen(Screen):
    def __init__(self):
        super().__init__()
        context = self.app.getContext()

    def compose(self) -> ComposeResult:
        yield Placeholder("Help Screen")
        yield Footer()
@@ -1,239 +0,0 @@
import re, logging

from jinja2 import Environment, Undefined
from .constants import DEFAULT_OUTPUT_FILENAME_TEMPLATE
from .configuration_controller import ConfigurationController


class EmptyStringUndefined(Undefined):
    def __str__(self):
        return ''


DIFF_ADDED_KEY = 'added'
DIFF_REMOVED_KEY = 'removed'
DIFF_CHANGED_KEY = 'changed'
DIFF_UNCHANGED_KEY = 'unchanged'

RICH_COLOR_PATTERN = r'\[[a-z_]+\](.+)\[\/[a-z_]+\]'


def dictDiff(a: dict, b: dict, ignoreKeys: list = [], removeKeys: list = []):
    """
    ignoreKeys: Ignored keys are excluded from the diff calculation entirely
    removeKeys: Keys that are always treated as removed, overriding the diff calculation
    """

    a_filtered = {k: v for k, v in a.items() if k not in ignoreKeys}
    b_filtered = {k: v for k, v in b.items() if k not in ignoreKeys and k not in removeKeys}

    a_only = {k: v for k, v in a_filtered.items() if k not in b_filtered.keys()}
    b_only = {k: v for k, v in b_filtered.items() if k not in a_filtered.keys()}

    a_b = set(a_filtered.keys()) & set(b_filtered.keys())

    changed = {k: b_filtered[k] for k in a_b if a_filtered[k] != b_filtered[k]}
    unchanged = {k: b_filtered[k] for k in a_b if a_filtered[k] == b_filtered[k]}

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = unchanged
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only
    if changed:
        diffResult[DIFF_CHANGED_KEY] = changed

    return diffResult
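The diff logic above can be exercised in isolation with a small standalone sketch (the real function lives in the ffx utils module; inputs here are hypothetical tag dicts):

```python
# Minimal standalone sketch of the dictDiff logic above; it also mirrors the
# original's quirk of only reporting 'unchanged' alongside removed keys.
def dict_diff(a, b, ignore_keys=(), remove_keys=()):
    a_f = {k: v for k, v in a.items() if k not in ignore_keys}
    b_f = {k: v for k, v in b.items() if k not in ignore_keys and k not in remove_keys}
    common = a_f.keys() & b_f.keys()
    result = {}
    removed = {k: v for k, v in a_f.items() if k not in b_f}
    if removed:
        result['removed'] = removed
        result['unchanged'] = {k: b_f[k] for k in common if a_f[k] == b_f[k]}
    added = {k: v for k, v in b_f.items() if k not in a_f}
    if added:
        result['added'] = added
    changed = {k: b_f[k] for k in common if a_f[k] != b_f[k]}
    if changed:
        result['changed'] = changed
    return result

old = {'title': 'Pilot', 'language': 'eng', 'creation_time': '2020-01-01'}
new = {'title': 'Pilot, Part 1', 'language': 'eng'}
# creation_time is ignored, so only the title change is reported
print(dict_diff(old, new, ignore_keys=('creation_time',)))
```

An empty result dict means the two inputs agree on every non-ignored key, which is exactly how dictCache below detects a hit.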
def dictKeysDiff(a: dict, b: dict):

    a_keys = set(a.keys())
    b_keys = set(b.keys())

    a_only = a_keys - b_keys
    b_only = b_keys - a_keys
    a_b = a_keys & b_keys

    changed = {k for k in a_b if a[k] != b[k]}

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = b_keys
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only
    if changed:
        diffResult[DIFF_CHANGED_KEY] = changed

    return diffResult


def dictCache(element: dict, cache: list = []):
    for index in range(len(cache)):
        diff = dictKeysDiff(cache[index], element)
        if not diff:
            return index, cache
    cache.append(element)
    return -1, cache
def setDiff(a: set, b: set) -> dict:

    a_only = a - b
    b_only = b - a
    a_and_b = a & b

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = a_and_b
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only

    return diffResult


def permutateList(inputList: list, permutation: list):

    # 0,1,2: ABC
    # 0,2,1: ACB
    # 1,2,0: BCA

    pass
def filterFilename(fileName: str) -> str:
    """This filter replaces characters from TMDB responses with characters
    that are less problematic in filenames, or removes them"""

    fileName = str(fileName).replace('/', '-')
    fileName = fileName.replace(':', ';')
    fileName = fileName.replace('*', '')
    fileName = fileName.replace("'", '')
    fileName = fileName.replace("?", '#')
    fileName = fileName.replace('♥', '')
    fileName = fileName.replace('’', '')

    return fileName.strip()


def substituteTmdbFilename(fileName: str) -> str:
    """If chaining this method with filterFilename, call this one first, as the
    latter destroys some of the patterns matched here"""

    # This indicates filler episodes in TMDB episode names
    fileName = str(fileName).replace(' (*)', '')
    fileName = fileName.replace('(*)', '')

    # This indicates the index of multi-episode files
    episodePartMatch = re.search("\\(([0-9]+)\\)$", fileName)
    if episodePartMatch is not None:
        partSuffix = str(episodePartMatch.group(0))
        partIndex = episodePartMatch.groups()[0]
        fileName = fileName.replace(partSuffix, f"Teil {partIndex}")

    # Also multi-episodes with first and last episode index
    episodePartMatch = re.search("\\(([0-9]+)[-\\/]([0-9]+)\\)$", fileName)
    if episodePartMatch is not None:
        partSuffix = str(episodePartMatch.group(0))
        partFirstIndex = episodePartMatch.groups()[0]
        partLastIndex = episodePartMatch.groups()[1]
        fileName = fileName.replace(partSuffix, f"Teil {partFirstIndex}-{partLastIndex}")

    return fileName
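The ordering constraint between the two helpers can be seen with a standalone sketch (a subset of the replacements; the German "Teil" part suffix follows the original code). Running filterFilename first would strip the `*` and destroy the `(*)` filler marker:

```python
import re

# Standalone sketch of the two filename helpers above (subset of rules).
def substitute_tmdb_filename(name: str) -> str:
    name = name.replace(' (*)', '').replace('(*)', '')   # filler-episode marker
    m = re.search(r"\(([0-9]+)\)$", name)                # multi-episode part index
    if m:
        name = name.replace(m.group(0), f"Teil {m.group(1)}")
    return name

def filter_filename(name: str) -> str:
    for src, dst in (('/', '-'), (':', ';'), ('*', ''), ("'", ''), ('?', '#')):
        name = name.replace(src, dst)
    return name.strip()

raw = "Who Are You? (1) (*)"
print(filter_filename(substitute_tmdb_filename(raw)))
```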
def getEpisodeFileBasename(showName,
                           episodeName,
                           season,
                           episode,
                           indexSeasonDigits = 2,
                           indexEpisodeDigits = 2,
                           indicatorSeasonDigits = 2,
                           indicatorEpisodeDigits = 2,
                           context = None):
    """
    One Piece:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 4,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 4

    Three-Body:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 2,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 2

    Dragonball:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 3,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 3

    Boruto:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 4,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 4
    """

    cc: ConfigurationController = context['config'] if context is not None and 'config' in context.keys() else None
    configData = cc.getData() if cc is not None else {}
    outputFilenameTemplate = configData.get(ConfigurationController.OUTPUT_FILENAME_TEMPLATE_KEY,
                                            DEFAULT_OUTPUT_FILENAME_TEMPLATE)

    if context is not None and 'logger' in context.keys():
        logger = context['logger']
    else:
        logger = logging.getLogger('FFX')
        logger.addHandler(logging.NullHandler())

    indexSeparator = ' ' if indexSeasonDigits or indexEpisodeDigits else ''
    seasonIndex = '{num:{fill}{width}}'.format(num=season, fill='0', width=indexSeasonDigits) if indexSeasonDigits else ''
    episodeIndex = '{num:{fill}{width}}'.format(num=episode, fill='0', width=indexEpisodeDigits) if indexEpisodeDigits else ''

    indicatorSeparator = ' - ' if indicatorSeasonDigits or indicatorEpisodeDigits else ''
    seasonIndicator = 'S{num:{fill}{width}}'.format(num=season, fill='0', width=indicatorSeasonDigits) if indicatorSeasonDigits else ''
    episodeIndicator = 'E{num:{fill}{width}}'.format(num=episode, fill='0', width=indicatorEpisodeDigits) if indicatorEpisodeDigits else ''

    jinjaKwargs = {
        'ffx_show_name': showName,
        'ffx_index_separator': indexSeparator,
        'ffx_season_index': str(seasonIndex),
        'ffx_episode_index': str(episodeIndex),
        'ffx_index': str(seasonIndex) + str(episodeIndex),
        'ffx_episode_name': episodeName,
        'ffx_indicator_separator': indicatorSeparator,
        'ffx_season_indicator': str(seasonIndicator),
        'ffx_episode_indicator': str(episodeIndicator),
        'ffx_indicator': str(seasonIndicator) + str(episodeIndicator)
    }

    jinjaEnv = Environment(undefined=EmptyStringUndefined)
    jinjaTemplate = jinjaEnv.from_string(outputFilenameTemplate)
    return jinjaTemplate.render(**jinjaKwargs)

    # return ''.join(filenameTokens)
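The template rendering above depends on undefined placeholders collapsing to empty strings, which is what EmptyStringUndefined provides to Jinja2. A stdlib-only analogue of that behavior, using str.format_map instead of Jinja2 (template and values here are illustrative, not the real DEFAULT_OUTPUT_FILENAME_TEMPLATE):

```python
# Missing keys render as '' instead of raising KeyError, mimicking the
# EmptyStringUndefined class used with the Jinja2 Environment above.
class EmptyOnMissing(dict):
    def __missing__(self, key):
        return ''

template = ("{ffx_show_name}{ffx_index_separator}{ffx_index}"
            "{ffx_indicator_separator}{ffx_indicator}")
rendered = template.format_map(EmptyOnMissing(
    ffx_show_name='Three-Body',
    ffx_indicator_separator=' - ',
    ffx_indicator='S02E05'))
print(rendered)
```

Because ffx_index_separator and ffx_index are unset, they vanish from the output instead of aborting the render, so one template covers shows with and without episode indices.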
def formatRichColor(text: str, color: str = None):
    if color is None:
        return text
    else:
        return f"[{color}]{text}[/{color}]"


def removeRichColor(text: str):
    richColorMatch = re.search(RICH_COLOR_PATTERN, text)
    if richColorMatch is None:
        return text
    else:
        return str(richColorMatch.group(1))
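For simple tags the two helpers are inverses; a standalone round trip using the same pattern:

```python
import re

# Same pattern as RICH_COLOR_PATTERN above, as a raw string.
RICH_COLOR_PATTERN = r'\[[a-z_]+\](.+)\[\/[a-z_]+\]'

def format_rich_color(text, color=None):
    return text if color is None else f"[{color}]{text}[/{color}]"

def remove_rich_color(text):
    m = re.search(RICH_COLOR_PATTERN, text)
    return text if m is None else str(m.group(1))

tagged = format_rich_color('deu', 'bold')
print(tagged)                     # wrapped in Rich-style markup
print(remove_rich_color(tagged))  # markup stripped again
```

Untagged text passes through remove_rich_color unchanged, so the function is safe to apply unconditionally.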
@@ -1,107 +0,0 @@
from enum import Enum
import difflib


class IsoLanguage(Enum):

    AFRIKAANS = {"name": "Afrikaans", "iso639_1": "af", "iso639_2": ["afr"]}
    ALBANIAN = {"name": "Albanian", "iso639_1": "sq", "iso639_2": ["alb"]}
    ARABIC = {"name": "Arabic", "iso639_1": "ar", "iso639_2": ["ara"]}
    ARMENIAN = {"name": "Armenian", "iso639_1": "hy", "iso639_2": ["arm"]}
    AZERBAIJANI = {"name": "Azerbaijani", "iso639_1": "az", "iso639_2": ["aze"]}
    BASQUE = {"name": "Basque", "iso639_1": "eu", "iso639_2": ["baq"]}
    BELARUSIAN = {"name": "Belarusian", "iso639_1": "be", "iso639_2": ["bel"]}
    BOKMAL = {"name": "Bokmål", "iso639_1": "nb", "iso639_2": ["nob"]}  # Norwegian Bokmål
    BULGARIAN = {"name": "Bulgarian", "iso639_1": "bg", "iso639_2": ["bul"]}
    CATALAN = {"name": "Catalan", "iso639_1": "ca", "iso639_2": ["cat"]}
    CHINESE = {"name": "Chinese", "iso639_1": "zh", "iso639_2": ["zho", "chi"]}
    CROATIAN = {"name": "Croatian", "iso639_1": "hr", "iso639_2": ["hrv"]}
    CZECH = {"name": "Czech", "iso639_1": "cs", "iso639_2": ["cze"]}
    DANISH = {"name": "Danish", "iso639_1": "da", "iso639_2": ["dan"]}
    DUTCH = {"name": "Dutch", "iso639_1": "nl", "iso639_2": ["nld", "dut"]}
    ENGLISH = {"name": "English", "iso639_1": "en", "iso639_2": ["eng"]}
    ESTONIAN = {"name": "Estonian", "iso639_1": "et", "iso639_2": ["est"]}
    FILIPINO = {"name": "Filipino", "iso639_1": "tl", "iso639_2": ["fil"]}  # Tagalog
    FINNISH = {"name": "Finnish", "iso639_1": "fi", "iso639_2": ["fin"]}
    FRENCH = {"name": "French", "iso639_1": "fr", "iso639_2": ["fra", "fre"]}
    GEORGIAN = {"name": "Georgian", "iso639_1": "ka", "iso639_2": ["geo"]}
    GERMAN = {"name": "German", "iso639_1": "de", "iso639_2": ["deu", "ger"]}
    GREEK = {"name": "Greek", "iso639_1": "el", "iso639_2": ["gre"]}
    HEBREW = {"name": "Hebrew", "iso639_1": "he", "iso639_2": ["heb"]}
    HINDI = {"name": "Hindi", "iso639_1": "hi", "iso639_2": ["hin"]}
    HUNGARIAN = {"name": "Hungarian", "iso639_1": "hu", "iso639_2": ["hun"]}
    ICELANDIC = {"name": "Icelandic", "iso639_1": "is", "iso639_2": ["ice"]}
    INDONESIAN = {"name": "Indonesian", "iso639_1": "id", "iso639_2": ["ind"]}
    IRISH = {"name": "Irish", "iso639_1": "ga", "iso639_2": ["gle"]}
    ITALIAN = {"name": "Italian", "iso639_1": "it", "iso639_2": ["ita"]}
    JAPANESE = {"name": "Japanese", "iso639_1": "ja", "iso639_2": ["jpn"]}
    KAZAKH = {"name": "Kazakh", "iso639_1": "kk", "iso639_2": ["kaz"]}
    KOREAN = {"name": "Korean", "iso639_1": "ko", "iso639_2": ["kor"]}
    LATIN = {"name": "Latin", "iso639_1": "la", "iso639_2": ["lat"]}
    LATVIAN = {"name": "Latvian", "iso639_1": "lv", "iso639_2": ["lav"]}
    LITHUANIAN = {"name": "Lithuanian", "iso639_1": "lt", "iso639_2": ["lit"]}
    MACEDONIAN = {"name": "Macedonian", "iso639_1": "mk", "iso639_2": ["mac"]}
    MALAY = {"name": "Malay", "iso639_1": "ms", "iso639_2": ["may"]}
    MALTESE = {"name": "Maltese", "iso639_1": "mt", "iso639_2": ["mlt"]}
    NORWEGIAN = {"name": "Norwegian", "iso639_1": "no", "iso639_2": ["nor"]}
    PERSIAN = {"name": "Persian", "iso639_1": "fa", "iso639_2": ["per"]}
    POLISH = {"name": "Polish", "iso639_1": "pl", "iso639_2": ["pol"]}
    PORTUGUESE = {"name": "Portuguese", "iso639_1": "pt", "iso639_2": ["por"]}
    ROMANIAN = {"name": "Romanian", "iso639_1": "ro", "iso639_2": ["rum"]}
    RUSSIAN = {"name": "Russian", "iso639_1": "ru", "iso639_2": ["rus"]}
    NORTHERN_SAMI = {"name": "Northern Sami", "iso639_1": "se", "iso639_2": ["sme"]}
    SAMOAN = {"name": "Samoan", "iso639_1": "sm", "iso639_2": ["smo"]}
    SANGO = {"name": "Sango", "iso639_1": "sg", "iso639_2": ["sag"]}
    SANSKRIT = {"name": "Sanskrit", "iso639_1": "sa", "iso639_2": ["san"]}
    SARDINIAN = {"name": "Sardinian", "iso639_1": "sc", "iso639_2": ["srd"]}
    SERBIAN = {"name": "Serbian", "iso639_1": "sr", "iso639_2": ["srp"]}
    SHONA = {"name": "Shona", "iso639_1": "sn", "iso639_2": ["sna"]}
    SINDHI = {"name": "Sindhi", "iso639_1": "sd", "iso639_2": ["snd"]}
    SINHALA = {"name": "Sinhala", "iso639_1": "si", "iso639_2": ["sin"]}
    SLOVAK = {"name": "Slovak", "iso639_1": "sk", "iso639_2": ["slo", "slk"]}
    SLOVENIAN = {"name": "Slovenian", "iso639_1": "sl", "iso639_2": ["slv"]}
    SOMALI = {"name": "Somali", "iso639_1": "so", "iso639_2": ["som"]}
    SOUTHERN_SOTHO = {"name": "Southern Sotho", "iso639_1": "st", "iso639_2": ["sot"]}
    SPANISH = {"name": "Spanish", "iso639_1": "es", "iso639_2": ["spa"]}
    SUNDANESE = {"name": "Sundanese", "iso639_1": "su", "iso639_2": ["sun"]}
    SWAHILI = {"name": "Swahili", "iso639_1": "sw", "iso639_2": ["swa"]}
    SWATI = {"name": "Swati", "iso639_1": "ss", "iso639_2": ["ssw"]}
    SWEDISH = {"name": "Swedish", "iso639_1": "sv", "iso639_2": ["swe"]}
    TAGALOG = {"name": "Tagalog", "iso639_1": "tl", "iso639_2": ["tgl"]}
    TAMIL = {"name": "Tamil", "iso639_1": "ta", "iso639_2": ["tam"]}
    THAI = {"name": "Thai", "iso639_1": "th", "iso639_2": ["tha"]}
    TURKISH = {"name": "Turkish", "iso639_1": "tr", "iso639_2": ["tur"]}
    UKRAINIAN = {"name": "Ukrainian", "iso639_1": "uk", "iso639_2": ["ukr"]}
    URDU = {"name": "Urdu", "iso639_1": "ur", "iso639_2": ["urd"]}
    VIETNAMESE = {"name": "Vietnamese", "iso639_1": "vi", "iso639_2": ["vie"]}
    WELSH = {"name": "Welsh", "iso639_1": "cy", "iso639_2": ["wel"]}

    UNDEFINED = {"name": "undefined", "iso639_1": "xx", "iso639_2": ["und"]}

    @staticmethod
    def find(label: str):

        closestMatches = difflib.get_close_matches(label, [l.value["name"] for l in IsoLanguage], n=1)

        if closestMatches:
            foundLangs = [l for l in IsoLanguage if l.value['name'] == closestMatches[0]]
            return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
        else:
            return IsoLanguage.UNDEFINED

    @staticmethod
    def findThreeLetter(threeLetter: str):
        foundLangs = [l for l in IsoLanguage if str(threeLetter) in l.value['iso639_2']]
        return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED

    def label(self):
        return str(self.value['name'])

    def twoLetter(self):
        return str(self.value['iso639_1'])

    def threeLetter(self):
        return str(self.value['iso639_2'][0])
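The fuzzy lookup in IsoLanguage.find rests on difflib.get_close_matches, which tolerates misspelled labels via the default 0.6 similarity cutoff. A sketch with a hypothetical three-entry table standing in for the full enum:

```python
import difflib

# Hypothetical stand-in for the full IsoLanguage table above.
LANGUAGES = {'German': 'deu', 'Georgian': 'geo', 'Greek': 'gre'}

def find(label: str) -> str:
    # n=1: keep only the single closest label above the default 0.6 cutoff
    matches = difflib.get_close_matches(label, LANGUAGES.keys(), n=1)
    return LANGUAGES[matches[0]] if matches else 'und'

print(find('Germna'))   # misspelling still resolves
print(find('Klingon'))  # no close match falls back to the undefined code
```

This mirrors the enum's behavior of returning IsoLanguage.UNDEFINED rather than raising when a label cannot be matched.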
@@ -1,48 +0,0 @@
import click

from ffx.model.pattern import Pattern
from ffx.media_descriptor import MediaDescriptor

from ffx.tag_controller import TagController
from ffx.track_controller import TrackController


class MediaController:

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience

        self.__logger = context['logger']

        self.__tc = TrackController(context=context)
        self.__tac = TagController(context=context)

    def setPatternMediaDescriptor(self, mediaDescriptor: MediaDescriptor, patternId: int):

        try:

            pid = int(patternId)

            s = self.Session()
            q = s.query(Pattern).filter(Pattern.id == pid)

            if q.count():
                pattern = q.first()

                for mediaTagKey, mediaTagValue in mediaDescriptor.getTags().items():
                    self.__tac.updateMediaTag(pid, mediaTagKey, mediaTagValue)
                # for trackDescriptor in mediaDescriptor.getAllTrackDescriptors():
                for trackDescriptor in mediaDescriptor.getTrackDescriptors():
                    self.__tc.addTrack(trackDescriptor, patternId=pid)

                s.commit()
                return True
            else:
                return False

        except Exception as ex:
            self.__logger.error(f"MediaController.setPatternMediaDescriptor(): {repr(ex)}")
            raise click.ClickException(f"MediaController.setPatternMediaDescriptor(): {repr(ex)}")
        finally:
            s.close()
@@ -1,512 +0,0 @@
import os, re, click, logging

from typing import List, Self

from ffx.track_type import TrackType
from ffx.iso_language import IsoLanguage

from ffx.track_disposition import TrackDisposition
from ffx.track_codec import TrackCodec

from ffx.track_descriptor import TrackDescriptor


class MediaDescriptor:
    """This class represents the structural content of a media file including streams and metadata"""

    CONTEXT_KEY = "context"

    TAGS_KEY = "tags"
    TRACKS_KEY = "tracks"

    TRACK_DESCRIPTOR_LIST_KEY = "track_descriptors"
    CLEAR_TAGS_FLAG_KEY = "clear_tags"

    FFPROBE_DISPOSITION_KEY = "disposition"
    FFPROBE_TAGS_KEY = "tags"
    FFPROBE_CODEC_TYPE_KEY = "codec_type"

    #407 remove as well
    EXCLUDED_MEDIA_TAGS = ["creation_time"]

    SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH = '[sS]([0-9]+)[eE]([0-9]+)_([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*'
    STREAM_LANGUAGE_DISPOSITIONS_MATCH = '([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*'

    SUBTITLE_FILE_EXTENSION = 'vtt'

    def __init__(self, **kwargs):

        if MediaDescriptor.CONTEXT_KEY in kwargs.keys():
            if type(kwargs[MediaDescriptor.CONTEXT_KEY]) is not dict:
                raise TypeError(
                    f"MediaDescriptor.__init__(): Argument {MediaDescriptor.CONTEXT_KEY} is required to be of type dict"
                )
            self.__context = kwargs[MediaDescriptor.CONTEXT_KEY]
            self.__logger = self.__context['logger']
        else:
            self.__context = {}
            self.__logger = logging.getLogger('FFX')
            self.__logger.addHandler(logging.NullHandler())

        if MediaDescriptor.TAGS_KEY in kwargs.keys():
            if type(kwargs[MediaDescriptor.TAGS_KEY]) is not dict:
                raise TypeError(
                    f"MediaDescriptor.__init__(): Argument {MediaDescriptor.TAGS_KEY} is required to be of type dict"
                )
            self.__mediaTags = kwargs[MediaDescriptor.TAGS_KEY]
        else:
            self.__mediaTags = {}

        if MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY in kwargs.keys():
            if (
                type(kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]) is not list
            ):  # Use a List type hint for TrackDescriptor as well if it works
                raise TypeError(
                    f"MediaDescriptor.__init__(): Argument {MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY} is required to be of type list"
                )
            for d in kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]:
                if type(d) is not TrackDescriptor:
                    raise TypeError(
                        f"MediaDescriptor.__init__(): All elements of argument list {MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY} are required to be of type TrackDescriptor"
                    )
            self.__trackDescriptors = kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]
        else:
            self.__trackDescriptors = []

    def setTrackLanguage(self, language: str, index: int, trackType: TrackType = None):

        trackLanguage = IsoLanguage.findThreeLetter(language)
        if trackLanguage == IsoLanguage.UNDEFINED:
            self.__logger.warning('MediaDescriptor.setTrackLanguage(): Parameter language does not contain a registered '
                                  + 'ISO 639 3-letter language code, skipping setting the language for '
                                  + ('' if trackType is None else trackType.label() + ' ') + f"track {index}")

        trackList = self.getTrackDescriptors(trackType=trackType)

        if index < 0 or index > len(trackList) - 1:
            self.__logger.warning(f"MediaDescriptor.setTrackLanguage(): Parameter index ({index}) is "
                                  + f"out of range of the {'' if trackType is None else trackType.label() + ' '}track list")
            return

        td: TrackDescriptor = trackList[index]
        td.setLanguage(trackLanguage)

        return

    def setTrackTitle(self, title: str, index: int, trackType: TrackType = None):

        trackList = self.getTrackDescriptors(trackType=trackType)

        if index < 0 or index > len(trackList) - 1:
            self.__logger.error(f"MediaDescriptor.setTrackTitle(): Parameter index ({index}) is "
                                + f"out of range of the {'' if trackType is None else trackType.label() + ' '}track list")
            raise click.Abort()

        td: TrackDescriptor = trackList[index]
        td.setTitle(title)

    def setDefaultSubTrack(self, trackType: TrackType, subIndex: int):
        # for t in self.getAllTrackDescriptors():
        for t in self.getTrackDescriptors():
            if t.getType() == trackType:
                t.setDispositionFlag(
                    TrackDisposition.DEFAULT, t.getSubIndex() == int(subIndex)
                )

    def setForcedSubTrack(self, trackType: TrackType, subIndex: int):
        # for t in self.getAllTrackDescriptors():
        for t in self.getTrackDescriptors():
            if t.getType() == trackType:
                t.setDispositionFlag(
                    TrackDisposition.FORCED, t.getSubIndex() == int(subIndex)
                )

    def checkConfiguration(self):

        videoTracks = self.getVideoTracks()
        audioTracks = self.getAudioTracks()
        subtitleTracks = self.getSubtitleTracks()

        if len([v for v in videoTracks if v.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
            raise ValueError('More than one default video track')
        if len([a for a in audioTracks if a.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
            raise ValueError('More than one default audio track')
        if len([s for s in subtitleTracks if s.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
            raise ValueError('More than one default subtitle track')

        if len([v for v in videoTracks if v.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
            raise ValueError('More than one forced video track')
        if len([a for a in audioTracks if a.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
            raise ValueError('More than one forced audio track')
        if len([s for s in subtitleTracks if s.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
            raise ValueError('More than one forced subtitle track')

        trackDescriptors = videoTracks + audioTracks + subtitleTracks
        sourceIndices = [
            t.getSourceIndex() for t in trackDescriptors
        ]
        if len(set(sourceIndices)) < len(trackDescriptors):
            raise ValueError('Multiple streams originating from the same source stream')

    def applyOverrides(self, overrides: dict):

        if 'languages' in overrides.keys():
            for trackIndex in overrides['languages'].keys():
                self.setTrackLanguage(overrides['languages'][trackIndex], trackIndex)

        if 'titles' in overrides.keys():
            for trackIndex in overrides['titles'].keys():
                self.setTrackTitle(overrides['titles'][trackIndex], trackIndex)

        if 'forced_video' in overrides.keys():
            sti = int(overrides['forced_video'])
            self.setForcedSubTrack(TrackType.VIDEO, sti)
            self.setDefaultSubTrack(TrackType.VIDEO, sti)

        elif 'default_video' in overrides.keys():
            sti = int(overrides['default_video'])
            self.setDefaultSubTrack(TrackType.VIDEO, sti)

        if 'forced_audio' in overrides.keys():
            sti = int(overrides['forced_audio'])
            self.setForcedSubTrack(TrackType.AUDIO, sti)
            self.setDefaultSubTrack(TrackType.AUDIO, sti)

        elif 'default_audio' in overrides.keys():
            sti = int(overrides['default_audio'])
            self.setDefaultSubTrack(TrackType.AUDIO, sti)

        if 'forced_subtitle' in overrides.keys():
            sti = int(overrides['forced_subtitle'])
            self.setForcedSubTrack(TrackType.SUBTITLE, sti)
            self.setDefaultSubTrack(TrackType.SUBTITLE, sti)

        elif 'default_subtitle' in overrides.keys():
            sti = int(overrides['default_subtitle'])
            self.setDefaultSubTrack(TrackType.SUBTITLE, sti)

        if 'stream_order' in overrides.keys():
            self.rearrangeTrackDescriptors(overrides['stream_order'])

    def applySourceIndices(self, sourceMediaDescriptor: Self):
        # sourceTrackDescriptors = sourceMediaDescriptor.getAllTrackDescriptors()
        sourceTrackDescriptors = sourceMediaDescriptor.getTrackDescriptors()

        numTrackDescriptors = len(self.__trackDescriptors)
        if len(sourceTrackDescriptors) != numTrackDescriptors:
            raise ValueError('MediaDescriptor.applySourceIndices(): Number of track descriptors does not match')

        for trackIndex in range(numTrackDescriptors):
            self.__trackDescriptors[trackIndex].setSourceIndex(sourceTrackDescriptors[trackIndex].getSourceIndex())

    def rearrangeTrackDescriptors(self, newOrder: List[int]):
        if len(newOrder) != len(self.__trackDescriptors):
            raise ValueError('Length of list with reordered indices does not match number of track descriptors')
        reorderedTrackDescriptors = []
        for oldIndex in newOrder:
            reorderedTrackDescriptors.append(self.__trackDescriptors[oldIndex])
        self.__trackDescriptors = reorderedTrackDescriptors
        self.reindexSubIndices()
        self.reindexIndices()

    @classmethod
    def fromFfprobe(cls, context, formatData, streamData):

        kwargs = {}

        kwargs[MediaDescriptor.CONTEXT_KEY] = context

        if MediaDescriptor.FFPROBE_TAGS_KEY in formatData.keys():
            kwargs[MediaDescriptor.TAGS_KEY] = formatData[
                MediaDescriptor.FFPROBE_TAGS_KEY
            ]

        kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY] = []

        # TODO: possibly obsolete
        subIndexCounters = {}

        for streamObj in streamData:

            ffprobeCodecType = streamObj[MediaDescriptor.FFPROBE_CODEC_TYPE_KEY]
            trackType = TrackType.fromLabel(ffprobeCodecType)

            if trackType != TrackType.UNKNOWN:

                if trackType not in subIndexCounters.keys():
                    subIndexCounters[trackType] = 0

                kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY].append(
                    TrackDescriptor.fromFfprobe(
                        streamObj, subIndex=subIndexCounters[trackType]
                    )
                )
                subIndexCounters[trackType] += 1

        return cls(**kwargs)

    def getTags(self):
        return self.__mediaTags

    def sortSubIndices(
        self, descriptors: List[TrackDescriptor]
    ) -> List[TrackDescriptor]:
        subIndex = 0
        for d in descriptors:
            d.setSubIndex(subIndex)
            subIndex += 1
        return descriptors

    def reindexSubIndices(self, trackDescriptors: list = []):
        tdList = trackDescriptors if trackDescriptors else self.__trackDescriptors
        subIndexCounter = {}
        for td in tdList:
            trackType = td.getType()
            if trackType not in subIndexCounter.keys():
                subIndexCounter[trackType] = 0
            td.setSubIndex(subIndexCounter[trackType])
            subIndexCounter[trackType] += 1
def sortIndices(
|
|
||||||
self, descriptors: List[TrackDescriptor]
|
|
||||||
) -> List[TrackDescriptor]:
|
|
||||||
index = 0
|
|
||||||
for d in descriptors:
|
|
||||||
d.setIndex(index)
|
|
||||||
index += 1
|
|
||||||
return descriptors
|
|
||||||
|
|
||||||
def reindexIndices(self, trackDescriptors: list = []):
|
|
||||||
tdList = trackDescriptors if trackDescriptors else self.__trackDescriptors
|
|
||||||
for trackIndex in range(len(tdList)):
|
|
||||||
tdList[trackIndex].setIndex(trackIndex)
|
|
||||||
|
|
||||||
|
|
||||||
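The per-type sub-index counters above can be illustrated with a minimal standalone sketch, using plain type strings in place of `TrackDescriptor` objects (the names here are illustrative, not the real API):

```python
# Sketch of the sub-index reindexing: each track gets a counter that is
# local to its track type, so streams are numbered 0..n per type.
def reindex_sub_indices(track_types):
    counters = {}
    result = []
    for track_type in track_types:
        sub_index = counters.get(track_type, 0)
        result.append((track_type, sub_index))
        counters[track_type] = sub_index + 1
    return result

# e.g. two audio tracks get sub-indices 0 and 1, independent of the
# video and subtitle counters.
```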
    # def getAllTrackDescriptors(self):
    #     """Returns all track descriptors sorted by type: video, audio then subtitles"""
    #     return self.getVideoTracks() + self.getAudioTracks() + self.getSubtitleTracks()


    def getTrackDescriptors(self,
                            trackType: TrackType = None) -> List[TrackDescriptor]:

        if trackType is None:
            return self.__trackDescriptors

        descriptorList = []
        for td in self.__trackDescriptors:
            if td.getType() == trackType:
                descriptorList.append(td)

        return descriptorList


    def getVideoTracks(self) -> List[TrackDescriptor]:
        return [v for v in self.__trackDescriptors if v.getType() == TrackType.VIDEO]

    def getAudioTracks(self) -> List[TrackDescriptor]:
        return [a for a in self.__trackDescriptors if a.getType() == TrackType.AUDIO]

    def getSubtitleTracks(self) -> List[TrackDescriptor]:
        return [
            s
            for s in self.__trackDescriptors
            if s.getType() == TrackType.SUBTITLE
        ]

    def getImportFileTokens(self, use_sub_index: bool = True):
        """Generate ffmpeg import options for external stream files"""

        importFileTokens = []

        td: TrackDescriptor
        for td in self.__trackDescriptors:

            importedFilePath = td.getExternalSourceFilePath()

            if importedFilePath:

                self.__logger.info(f"Substituting subtitle stream #{td.getIndex()} "
                                   + f"({td.getType().label()}:{td.getSubIndex()}) "
                                   + f"with import from file {td.getExternalSourceFilePath()}")

                importFileTokens += [
                    "-i",
                    importedFilePath,
                ]

        return importFileTokens

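A minimal sketch of the `-i` token generation above, with plain dicts standing in for `TrackDescriptor` objects (the `"external_path"` key is a hypothetical stand-in for `getExternalSourceFilePath()`):

```python
# Each track with an external source file contributes one "-i <path>"
# option pair; tracks without one are skipped.
def import_file_tokens(tracks):
    tokens = []
    for track in tracks:
        path = track.get("external_path")
        if path:
            tokens += ["-i", path]
    return tokens
```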
    def getInputMappingTokens(self, use_sub_index: bool = True, only_video: bool = False):
        """Tracks must be reordered into source index order before mapping"""

        inputMappingTokens = []

        sortedTrackDescriptors = sorted(self.__trackDescriptors, key=lambda d: d.getIndex())

        # raise click.ClickException(' '.join([f"\nindex={td.getIndex()} subIndex={td.getSubIndex()} srcIndex={td.getSourceIndex()} type={td.getType().label()}" for td in self.__trackDescriptors]))

        filePointer = 1
        for trackIndex in range(len(sortedTrackDescriptors)):

            td: TrackDescriptor = sortedTrackDescriptors[trackIndex]

            #HINT: Attached thumbnails are not supported by the .webm container format
            if td.getCodec() != TrackCodec.PNG:

                stdi = sortedTrackDescriptors[td.getSourceIndex()].getIndex()
                stdsi = sortedTrackDescriptors[td.getSourceIndex()].getSubIndex()

                trackType = td.getType()

                if (trackType == TrackType.VIDEO or not only_video):

                    importedFilePath = td.getExternalSourceFilePath()

                    if use_sub_index:

                        if importedFilePath:

                            inputMappingTokens += [
                                "-map",
                                f"{filePointer}:{trackType.indicator()}:0",
                            ]
                            filePointer += 1

                        else:

                            if td.getCodec() not in [TrackCodec.PGS, TrackCodec.VOBSUB]:
                                inputMappingTokens += [
                                    "-map",
                                    f"0:{trackType.indicator()}:{stdsi}",
                                ]

                    else:
                        if td.getCodec() not in [TrackCodec.PGS, TrackCodec.VOBSUB]:
                            inputMappingTokens += ["-map", f"0:{stdi}"]

        return inputMappingTokens

    def searchSubtitleFiles(self, searchDirectory, prefix):

        sesld_match = re.compile(f"{prefix}_{MediaDescriptor.SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH}")
        sld_match = re.compile(f"{prefix}_{MediaDescriptor.STREAM_LANGUAGE_DISPOSITIONS_MATCH}")

        subtitleFileDescriptors = []

        for subtitleFilename in os.listdir(searchDirectory):
            if subtitleFilename.startswith(prefix) and subtitleFilename.endswith(
                "." + MediaDescriptor.SUBTITLE_FILE_EXTENSION
            ):

                sesld_result = sesld_match.search(subtitleFilename)
                sld_result = None if sesld_result is not None else sld_match.search(subtitleFilename)

                if sesld_result is not None:

                    subtitleFilePath = os.path.join(searchDirectory, subtitleFilename)
                    if os.path.isfile(subtitleFilePath):

                        subtitleFileDescriptor = {}
                        subtitleFileDescriptor["path"] = subtitleFilePath
                        subtitleFileDescriptor["season"] = int(sesld_result.group(1))
                        subtitleFileDescriptor["episode"] = int(sesld_result.group(2))
                        subtitleFileDescriptor["index"] = int(sesld_result.group(3))
                        subtitleFileDescriptor["language"] = sesld_result.group(4)

                        dispSet = set()
                        dispCaptGroups = sesld_result.groups()
                        numCaptGroups = len(dispCaptGroups)
                        if numCaptGroups > 4:
                            for groupIndex in range(numCaptGroups - 4):
                                disp = TrackDisposition.fromIndicator(dispCaptGroups[groupIndex + 4])
                                if disp is not None:
                                    dispSet.add(disp)
                        subtitleFileDescriptor["disposition_set"] = dispSet

                        subtitleFileDescriptors.append(subtitleFileDescriptor)

                if sld_result is not None:

                    subtitleFilePath = os.path.join(searchDirectory, subtitleFilename)
                    if os.path.isfile(subtitleFilePath):

                        subtitleFileDescriptor = {}
                        subtitleFileDescriptor["path"] = subtitleFilePath
                        subtitleFileDescriptor["index"] = int(sld_result.group(1))
                        subtitleFileDescriptor["language"] = sld_result.group(2)

                        dispSet = set()
                        dispCaptGroups = sld_result.groups()
                        numCaptGroups = len(dispCaptGroups)
                        if numCaptGroups > 2:
                            for groupIndex in range(numCaptGroups - 2):
                                disp = TrackDisposition.fromIndicator(dispCaptGroups[groupIndex + 2])
                                if disp is not None:
                                    dispSet.add(disp)
                        subtitleFileDescriptor["disposition_set"] = dispSet

                        subtitleFileDescriptors.append(subtitleFileDescriptor)

        self.__logger.debug(f"searchSubtitleFiles(): Available subtitle files {subtitleFileDescriptors}")

        return subtitleFileDescriptors

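The capture-group extraction above can be sketched standalone. The real `SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH` constant is defined elsewhere and is not shown here, so the pattern below is purely hypothetical, assuming filenames shaped like `<prefix>_s01e02_3_eng.srt`:

```python
import re

# Hypothetical filename pattern: season, episode, stream index, language.
# The project's actual regex (with disposition groups) lives in
# MediaDescriptor and may differ.
pattern = re.compile(r"s(\d+)e(\d+)_(\d+)_([a-z]{3})\.srt$")

def parse_subtitle_filename(name):
    m = pattern.search(name)
    if m is None:
        return None
    return {
        "season": int(m.group(1)),
        "episode": int(m.group(2)),
        "index": int(m.group(3)),
        "language": m.group(4),
    }
```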
    def importSubtitles(self, searchDirectory, prefix, season: int = -1, episode: int = -1):

        # click.echo(f"Season: {season} Episode: {episode}")
        self.__logger.debug(f"importSubtitles(): Season: {season} Episode: {episode}")

        availableFileSubtitleDescriptors = self.searchSubtitleFiles(searchDirectory, prefix)

        self.__logger.debug(f"importSubtitles(): availableFileSubtitleDescriptors: {availableFileSubtitleDescriptors}")

        subtitleTracks = self.getSubtitleTracks()

        self.__logger.debug(f"importSubtitles(): subtitleTracks: {[s.getIndex() for s in subtitleTracks]}")

        matchingSubtitleFileDescriptors = (
            sorted(
                [
                    d
                    for d in availableFileSubtitleDescriptors
                    if ((season == -1 and episode == -1)
                        # Descriptors without season/episode information never match here
                        or (d.get("season") == int(season) and d.get("episode") == int(episode)))
                ],
                key=lambda d: d["index"],
            )
            if availableFileSubtitleDescriptors
            else []
        )

        self.__logger.debug(f"importSubtitles(): matchingSubtitleFileDescriptors: {matchingSubtitleFileDescriptors}")

        for msfd in matchingSubtitleFileDescriptors:
            matchingSubtitleTrackDescriptor = [s for s in subtitleTracks if s.getIndex() == msfd["index"]]
            if matchingSubtitleTrackDescriptor:
                # click.echo(f"Found matching subtitle file {msfd['path']}\n")
                self.__logger.debug(f"importSubtitles(): Found matching subtitle file {msfd['path']}")
                matchingSubtitleTrackDescriptor[0].setExternalSourceFilePath(msfd["path"])

                # TODO: Check if useful
                # matchingSubtitleTrackDescriptor[0].setDispositionSet(msfd["disposition_set"])

    def getConfiguration(self, label: str = ''):
        yield f"--- {label if label else 'MediaDescriptor '+str(id(self))} {' '.join([str(k)+'='+str(v) for k,v in self.__mediaTags.items()])}"
        # for td in self.getAllTrackDescriptors():
        for td in self.getTrackDescriptors():
            yield (f"{td.getIndex()}:{td.getType().indicator()}:{td.getSubIndex()} "
                   + '|'.join([d.indicator() for d in td.getDispositionSet()])
                   + ' ' + ' '.join([str(k)+'='+str(v) for k,v in td.getTags().items()]))
@@ -1,302 +0,0 @@
import click

from ffx.media_descriptor import MediaDescriptor
from ffx.track_descriptor import TrackDescriptor

from ffx.helper import dictDiff, setDiff, DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY, DIFF_UNCHANGED_KEY

from ffx.track_codec import TrackCodec
from ffx.track_disposition import TrackDisposition


class MediaDescriptorChangeSet():

    TAGS_KEY = "tags"
    TRACKS_KEY = "tracks"
    DISPOSITION_SET_KEY = "disposition_set"

    TRACK_DESCRIPTOR_KEY = "track_descriptor"


    def __init__(self, context,
                 targetMediaDescriptor: MediaDescriptor = None,
                 sourceMediaDescriptor: MediaDescriptor = None):

        self.__context = context
        self.__logger = context['logger']

        self.__configurationData = self.__context['config'].getData()

        metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}

        self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
        self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
        self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
        self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'remove' in metadataConfiguration['streams'].keys() else [])
        self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'ignore' in metadataConfiguration['streams'].keys() else [])


        self.__targetTrackDescriptors = targetMediaDescriptor.getTrackDescriptors() if targetMediaDescriptor is not None else []
        self.__sourceTrackDescriptors = sourceMediaDescriptor.getTrackDescriptors() if sourceMediaDescriptor is not None else []

        targetMediaTags = targetMediaDescriptor.getTags() if targetMediaDescriptor is not None else {}
        sourceMediaTags = sourceMediaDescriptor.getTags() if sourceMediaDescriptor is not None else {}


        self.__changeSetObj = {}

        #if targetMediaDescriptor is not None:

        #!!#
        tagsDiff = dictDiff(sourceMediaTags,
                            targetMediaTags,
                            ignoreKeys=self.__ignoreGlobalKeys,
                            removeKeys=self.__removeGlobalKeys)

        if tagsDiff:
            self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY] = tagsDiff


        # Target track configuration (from database)
        self.__numTargetTracks = len(self.__targetTrackDescriptors)

        # Current track configuration (of file)
        self.__numSourceTracks = len(self.__sourceTrackDescriptors)

        maxNumOfTracks = max(self.__numSourceTracks, self.__numTargetTracks)

        trackCompareResult = {}

        for trackIndex in range(maxNumOfTracks):

            correspondingSourceTrackDescriptors = [st for st in self.__sourceTrackDescriptors if st.getIndex() == trackIndex]
            correspondingTargetTrackDescriptors = [tt for tt in self.__targetTrackDescriptors if tt.getIndex() == trackIndex]

            # Track present in target but not in source
            if (not correspondingSourceTrackDescriptors
                    and correspondingTargetTrackDescriptors):

                if DIFF_ADDED_KEY not in trackCompareResult.keys():
                    trackCompareResult[DIFF_ADDED_KEY] = {}

                trackCompareResult[DIFF_ADDED_KEY][trackIndex] = correspondingTargetTrackDescriptors[0]
                continue

            # Track present in source but not in target
            if (correspondingSourceTrackDescriptors
                    and not correspondingTargetTrackDescriptors):

                if DIFF_REMOVED_KEY not in trackCompareResult.keys():
                    trackCompareResult[DIFF_REMOVED_KEY] = {}

                trackCompareResult[DIFF_REMOVED_KEY][trackIndex] = correspondingSourceTrackDescriptors[0]
                continue

            if (correspondingSourceTrackDescriptors
                    and correspondingTargetTrackDescriptors):

                # if correspondingTargetTrackDescriptors[0].getIndex() == 3:
                #     raise click.ClickException(f"{correspondingSourceTrackDescriptors[0].getDispositionSet()} {correspondingTargetTrackDescriptors[0].getDispositionSet()}")

                trackDiff = self.compareTracks(correspondingTargetTrackDescriptors[0],
                                               correspondingSourceTrackDescriptors[0])

                if trackDiff:
                    if DIFF_CHANGED_KEY not in trackCompareResult.keys():
                        trackCompareResult[DIFF_CHANGED_KEY] = {}

                    trackCompareResult[DIFF_CHANGED_KEY][trackIndex] = trackDiff


        if trackCompareResult:
            self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY] = trackCompareResult

    def compareTracks(self,
                      targetTrackDescriptor: TrackDescriptor = None,
                      sourceTrackDescriptor: TrackDescriptor = None):

        sourceTrackTags = sourceTrackDescriptor.getTags() if sourceTrackDescriptor is not None else {}
        targetTrackTags = targetTrackDescriptor.getTags() if targetTrackDescriptor is not None else {}

        trackCompareResult = {}

        tagsDiffResult = dictDiff(sourceTrackTags,
                                  targetTrackTags,
                                  ignoreKeys=self.__ignoreTrackKeys,
                                  removeKeys=self.__removeTrackKeys)

        if tagsDiffResult:
            trackCompareResult[MediaDescriptorChangeSet.TAGS_KEY] = tagsDiffResult

        sourceDispositionSet = sourceTrackDescriptor.getDispositionSet() if sourceTrackDescriptor is not None else set()
        targetDispositionSet = targetTrackDescriptor.getDispositionSet() if targetTrackDescriptor is not None else set()

        # if targetTrackDescriptor.getIndex() == 3:
        #     raise click.ClickException(f"{sourceDispositionSet} {targetDispositionSet}")

        dispositionDiffResult = setDiff(sourceDispositionSet, targetDispositionSet)

        if dispositionDiffResult:
            trackCompareResult[MediaDescriptorChangeSet.DISPOSITION_SET_KEY] = dispositionDiffResult

        return trackCompareResult

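The `dictDiff` helper from `ffx.helper` is not shown in this diff; a minimal sketch of its assumed semantics (keys only in the target are "added", keys only in the source "removed", shared keys with differing values "changed", the rest "unchanged", with empty buckets omitted) would look like this. The bucket names stand in for the `DIFF_*_KEY` constants:

```python
# Assumed semantics of the dictDiff helper used above; the real helper
# additionally supports ignoreKeys/removeKeys, which are omitted here.
def dict_diff(source, target):
    added = {k: v for k, v in target.items() if k not in source}
    removed = {k: v for k, v in source.items() if k not in target}
    shared = source.keys() & target.keys()
    changed = {k: target[k] for k in shared if source[k] != target[k]}
    unchanged = {k: target[k] for k in shared if source[k] == target[k]}
    diff = {}
    for key, bucket in (("added", added), ("removed", removed),
                        ("changed", changed), ("unchanged", unchanged)):
        if bucket:
            diff[key] = bucket
    return diff
```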
    def generateDispositionTokens(self):
        """
        #Example: -disposition:s:0 default -disposition:s:1 0
        """
        dispositionTokens = []

        # if MediaDescriptorChangeSet.TRACKS_KEY in self.__changeSetObj.keys():
        #
        #     if DIFF_ADDED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
        #         addedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY]
        #         trackDescriptor: TrackDescriptor
        #         for trackDescriptor in addedTracks.values():
        #
        #             dispositionSet = trackDescriptor.getDispositionSet()
        #
        #             if dispositionSet:
        #                 dispositionTokens += [f"-disposition:{trackDescriptor.getType().indicator()}:{trackDescriptor.getSubIndex()}",
        #                                       '+'.join([d.label() for d in dispositionSet])]
        #
        #     if DIFF_CHANGED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
        #         changedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY]
        #         trackDiffObj: dict
        #
        #         for trackIndex, trackDiffObj in changedTracks.items():
        #
        #             if MediaDescriptorChangeSet.DISPOSITION_SET_KEY in trackDiffObj.keys():
        #
        #                 dispositionDiffObj: dict = trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY]
        #
        #                 addedDispositions = dispositionDiffObj[DIFF_ADDED_KEY] if DIFF_ADDED_KEY in dispositionDiffObj.keys() else set()
        #                 removedDispositions = dispositionDiffObj[DIFF_REMOVED_KEY] if DIFF_REMOVED_KEY in dispositionDiffObj.keys() else set()
        #                 unchangedDispositions = dispositionDiffObj[DIFF_UNCHANGED_KEY] if DIFF_UNCHANGED_KEY in dispositionDiffObj.keys() else set()
        #
        #                 targetDispositions = addedDispositions | unchangedDispositions
        #
        #                 trackDescriptor = self.__targetTrackDescriptors[trackIndex]
        #                 streamIndicator = trackDescriptor.getType().indicator()
        #                 subIndex = trackDescriptor.getSubIndex()
        #
        #                 if targetDispositions:
        #                     dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '+'.join([d.label() for d in targetDispositions])]
        #                 # if not targetDispositions and removedDispositions:
        #                 else:
        #                     dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '0']

        for ttd in self.__targetTrackDescriptors:

            targetDispositions = ttd.getDispositionSet()
            streamIndicator = ttd.getType().indicator()
            subIndex = ttd.getSubIndex()

            if targetDispositions:
                dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '+'.join([d.label() for d in targetDispositions])]
            # if not targetDispositions and removedDispositions:
            else:
                dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '0']

        return dispositionTokens

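The disposition token layout from the docstring example (`-disposition:s:0 default -disposition:s:1 0`) can be sketched with plain tuples standing in for `TrackDescriptor` objects:

```python
# tracks: (stream_indicator, sub_index, disposition_labels) tuples --
# illustrative stand-ins. Dispositions are joined with "+", and an empty
# set is emitted as "0" to clear any existing disposition flags.
def disposition_tokens(tracks):
    tokens = []
    for indicator, sub_index, dispositions in tracks:
        value = "+".join(dispositions) if dispositions else "0"
        tokens += [f"-disposition:{indicator}:{sub_index}", value]
    return tokens
```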
    def generateMetadataTokens(self):

        metadataTokens = []

        if MediaDescriptorChangeSet.TAGS_KEY in self.__changeSetObj.keys():

            addedMediaTags = (self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY]
                              if DIFF_ADDED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
            removedMediaTags = (self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY]
                                if DIFF_REMOVED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
            changedMediaTags = (self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY]
                                if DIFF_CHANGED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})

            outputMediaTags = addedMediaTags | changedMediaTags

            if ('no_signature' not in self.__context.keys()
                    or not self.__context['no_signature']):
                outputMediaTags = outputMediaTags | self.__signatureTags

            # outputMediaTags = {k:v for k,v in outputMediaTags.items() if k not in self.__removeGlobalKeys}

            # changedMediaTags is already contained in outputMediaTags
            for tagKey, tagValue in outputMediaTags.items():
                metadataTokens += ["-metadata:g",
                                   f"{tagKey}={tagValue}"]

            for removeKey in removedMediaTags.keys():
                metadataTokens += ["-metadata:g",
                                   f"{removeKey}="]


        if MediaDescriptorChangeSet.TRACKS_KEY in self.__changeSetObj.keys():

            if DIFF_ADDED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                addedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY]
                trackDescriptor: TrackDescriptor
                for trackDescriptor in addedTracks.values():
                    for tagKey, tagValue in trackDescriptor.getTags().items():
                        if tagKey not in self.__removeTrackKeys:
                            metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                               + f":{trackDescriptor.getSubIndex()}",
                                               f"{tagKey}={tagValue}"]

            if DIFF_CHANGED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                changedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY]
                trackDiffObj: dict
                for trackIndex, trackDiffObj in changedTracks.items():

                    if MediaDescriptorChangeSet.TAGS_KEY in trackDiffObj.keys():

                        tagsDiffObj = trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY]

                        addedTrackTags = tagsDiffObj[DIFF_ADDED_KEY] if DIFF_ADDED_KEY in tagsDiffObj.keys() else {}
                        changedTrackTags = tagsDiffObj[DIFF_CHANGED_KEY] if DIFF_CHANGED_KEY in tagsDiffObj.keys() else {}
                        unchangedTrackTags = tagsDiffObj[DIFF_UNCHANGED_KEY] if DIFF_UNCHANGED_KEY in tagsDiffObj.keys() else {}
                        removedTrackTags = tagsDiffObj[DIFF_REMOVED_KEY] if DIFF_REMOVED_KEY in tagsDiffObj.keys() else {}

                        outputTrackTags = addedTrackTags | changedTrackTags

                        trackDescriptor = self.__targetTrackDescriptors[trackIndex]

                        for tagKey, tagValue in outputTrackTags.items():
                            metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                               + f":{trackDescriptor.getSubIndex()}",
                                               f"{tagKey}={tagValue}"]

                        for removeKey in removedTrackTags.keys():
                            metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                               + f":{trackDescriptor.getSubIndex()}",
                                               f"{removeKey}="]

                        #HINT: In case of loading a track from an external file
                        # no tags from source are present for the track, so
                        # the unchanged tags are passed to the output file as well
                        if trackDescriptor.getExternalSourceFilePath():
                            for tagKey, tagValue in unchangedTrackTags.items():
                                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                                   + f":{trackDescriptor.getSubIndex()}",
                                                   f"{tagKey}={tagValue}"]

        return metadataTokens

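The `-metadata` token shape used above can be sketched in isolation: global tags target the `:g` specifier, per-stream tags `:s:<indicator>:<sub_index>`, and removed keys are written with an empty value so ffmpeg clears them. The function signature here is illustrative, not part of the real class:

```python
# Builds "-metadata" option pairs. With indicator=None the global
# specifier ":g" is used, otherwise a per-stream specifier.
def metadata_tokens(output_tags, removed_keys, indicator=None, sub_index=None):
    spec = "-metadata:g" if indicator is None else f"-metadata:s:{indicator}:{sub_index}"
    tokens = []
    for key, value in output_tags.items():
        tokens += [spec, f"{key}={value}"]
    for key in removed_keys:
        tokens += [spec, f"{key}="]  # empty value clears the tag
    return tokens
```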
    def getChangeSetObj(self):
        return self.__changeSetObj
@@ -1,757 +0,0 @@
import os, click, re

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input, DataTable
from textual.containers import Grid

from ffx.audio_layout import AudioLayout

from .pattern_controller import PatternController
from .show_controller import ShowController
from .track_controller import TrackController
from .tag_controller import TagController

from .show_details_screen import ShowDetailsScreen
from .pattern_details_screen import PatternDetailsScreen

from ffx.track_type import TrackType
from ffx.track_codec import TrackCodec
from ffx.model.track import Track

from ffx.track_disposition import TrackDisposition
from ffx.track_descriptor import TrackDescriptor
from ffx.show_descriptor import ShowDescriptor

from textual.widgets._data_table import CellDoesNotExist

from ffx.media_descriptor import MediaDescriptor
from ffx.file_properties import FileProperties

from ffx.media_descriptor_change_set import MediaDescriptorChangeSet

from ffx.helper import formatRichColor, DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY, DIFF_UNCHANGED_KEY


# Screen[dict[int, str, int]]
class MediaDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 8;
        grid-rows: 8 2 2 2 2 8 2 2 8;
        grid-columns: 15 25 90 10 105;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        min-height: 40;
    }

    #toplabel {
        height: 1;
    }
    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .triple {
        row-span: 3;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .purple {
        tint: purple 40%;
    }

    .yellow {
        tint: yellow 40%;
    }

    #differences-table {
        row-span: 8;
        /* tint: magenta 40%; */
    }

    /* #pattern_input {
        tint: red 40%;
    }*/
    """

    TRACKS_TABLE_INDEX_COLUMN_LABEL = "Index"
    TRACKS_TABLE_TYPE_COLUMN_LABEL = "Type"
    TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL = "SubIndex"
    TRACKS_TABLE_CODEC_COLUMN_LABEL = "Codec"
    TRACKS_TABLE_LAYOUT_COLUMN_LABEL = "Layout"
    TRACKS_TABLE_LANGUAGE_COLUMN_LABEL = "Language"
    TRACKS_TABLE_TITLE_COLUMN_LABEL = "Title"
    TRACKS_TABLE_DEFAULT_COLUMN_LABEL = "Default"
    TRACKS_TABLE_FORCED_COLUMN_LABEL = "Forced"

    DIFFERENCES_TABLE_DIFFERENCES_COLUMN_LABEL = 'Differences (file->db/output)'


    BINDINGS = [
        ("n", "new_pattern", "New Pattern"),
        ("u", "update_pattern", "Update Pattern"),
        ("e", "edit_pattern", "Edit Pattern"),
    ]

    def __init__(self):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__configurationData = self.context['config'].getData()

        metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}

        self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
        self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
        self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
        self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'remove' in metadataConfiguration['streams'].keys() else [])
        self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'ignore' in metadataConfiguration['streams'].keys() else [])


        self.__pc = PatternController(context=self.context)
        self.__sc = ShowController(context=self.context)
        self.__tc = TrackController(context=self.context)
        self.__tac = TagController(context=self.context)

        if 'command' not in self.context.keys() or self.context['command'] != 'inspect':
            raise click.ClickException("MediaDetailsScreen.__init__(): Can only perform command 'inspect'")

        if 'arguments' not in self.context.keys() or 'filename' not in self.context['arguments'].keys() or not self.context['arguments']['filename']:
            raise click.ClickException("MediaDetailsScreen.__init__(): Argument 'filename' is required to be provided for command 'inspect'")

        self.__mediaFilename = self.context['arguments']['filename']

        if not os.path.isfile(self.__mediaFilename):
            raise click.ClickException(f"MediaDetailsScreen.__init__(): Media file {self.__mediaFilename} does not exist")

        self.loadProperties()

    def removeShow(self, showId : int = -1):
        """Remove a show entry from the shows DataTable.

        Removes the <New show> placeholder entry if showId is not set."""

        for rowKey, row in self.showsTable.rows.items():  # dict[RowKey, Row]

            rowData = self.showsTable.get_row(rowKey)

            try:
                if (showId == -1 and rowData[0] == ' '
                        or showId == int(rowData[0])):
                    self.showsTable.remove_row(rowKey)
                    return
            except (ValueError, IndexError):
                # Id column does not hold an integer for this row; skip it
                continue

    def getRowIndexFromShowId(self, showId : int = -1) -> int | None:
        """Return the index of the table row whose id column matches showId,
        the index of the <New show> placeholder row if showId is not set,
        or None if no row matches."""

        for rowKey, row in self.showsTable.rows.items():  # dict[RowKey, Row]

            rowData = self.showsTable.get_row(rowKey)

            try:
                if ((showId == -1 and rowData[0] == ' ')
                        or showId == int(rowData[0])):
                    return int(self.showsTable.get_row_index(rowKey))
            except (ValueError, IndexError):
                continue

        return None

    def loadProperties(self):

        self.__mediaFileProperties = FileProperties(self.context, self.__mediaFilename)
        self.__sourceMediaDescriptor = self.__mediaFileProperties.getMediaDescriptor()

        #HINT: This is None if the filename did not match anything in the database
        self.__currentPattern = self.__mediaFileProperties.getPattern()

        # No tags are available when there is no matching pattern
        self.__targetMediaDescriptor = self.__currentPattern.getMediaDescriptor(self.context) if self.__currentPattern is not None else None

        # Enumerate differences between the media descriptors:
        # from file (=current) vs stored in database (=target)
        try:
            mdcs = MediaDescriptorChangeSet(self.context,
                                            self.__targetMediaDescriptor,
                                            self.__sourceMediaDescriptor)

            self.__mediaChangeSetObj = mdcs.getChangeSetObj()
        except ValueError:
            self.__mediaChangeSetObj = {}

    def updateDifferences(self):

        self.loadProperties()

        self.differencesTable.clear()

        if MediaDescriptorChangeSet.TAGS_KEY in self.__mediaChangeSetObj.keys():

            if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY].items():
                    if tagKey not in self.__ignoreGlobalKeys:
                        row = (f"add media tag: key='{tagKey}' value='{tagValue}'",)
                        self.differencesTable.add_row(*map(str, row))

            if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY].items():
                    if tagKey not in self.__ignoreGlobalKeys and tagKey not in self.__removeGlobalKeys:
                        row = (f"remove media tag: key='{tagKey}' value='{tagValue}'",)
                        self.differencesTable.add_row(*map(str, row))

            if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY].items():
                    if tagKey not in self.__ignoreGlobalKeys:
                        row = (f"change media tag: key='{tagKey}' value='{tagValue}'",)
                        self.differencesTable.add_row(*map(str, row))

        if MediaDescriptorChangeSet.TRACKS_KEY in self.__mediaChangeSetObj.keys():

            if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():

                trackDescriptor: TrackDescriptor
                for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY].items():
                    row = (f"add {trackDescriptor.getType().label()} track: index={trackDescriptor.getIndex()} lang={trackDescriptor.getLanguage().threeLetter()}",)
                    self.differencesTable.add_row(*map(str, row))

            if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_REMOVED_KEY].items():
                    row = (f"remove stream #{trackIndex}",)
                    self.differencesTable.add_row(*map(str, row))

            if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():

                changedTracks: dict = self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY]

                targetTrackDescriptors = self.__targetMediaDescriptor.getTrackDescriptors()

                trackDiffObj: dict
                for trackIndex, trackDiffObj in changedTracks.items():

                    ttd: TrackDescriptor = targetTrackDescriptors[trackIndex]

                    if MediaDescriptorChangeSet.TAGS_KEY in trackDiffObj.keys():

                        removedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY]
                                       if DIFF_REMOVED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
                        for tagKey, tagValue in removedTags.items():
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) remove key={tagKey} value={tagValue}",)
                            self.differencesTable.add_row(*map(str, row))

                        addedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY]
                                     if DIFF_ADDED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
                        for tagKey, tagValue in addedTags.items():
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) add key={tagKey} value={tagValue}",)
                            self.differencesTable.add_row(*map(str, row))

                        changedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY]
                                       if DIFF_CHANGED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
                        for tagKey, tagValue in changedTags.items():
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) change key={tagKey} value={tagValue}",)
                            self.differencesTable.add_row(*map(str, row))

                    if MediaDescriptorChangeSet.DISPOSITION_SET_KEY in trackDiffObj.keys():

                        addedDispositions = (trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY][DIFF_ADDED_KEY]
                                             if DIFF_ADDED_KEY in trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY].keys() else set())
                        for ad in addedDispositions:
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) add disposition={ad.label()}",)
                            self.differencesTable.add_row(*map(str, row))

                        removedDispositions = (trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY][DIFF_REMOVED_KEY]
                                               if DIFF_REMOVED_KEY in trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY].keys() else set())
                        for rd in removedDispositions:
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) remove disposition={rd.label()}",)
                            self.differencesTable.add_row(*map(str, row))

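The change-set objects walked above reduce to a three-way diff of two tag dictionaries. A minimal sketch of that idea (function name and key strings are illustrative, not the actual `MediaDescriptorChangeSet` API):

```python
def diffTags(current: dict, target: dict) -> dict:
    """Three-way diff: keys only in current, keys only in target, keys with differing values."""
    return {
        'added':   {k: v for k, v in current.items() if k not in target},
        'removed': {k: v for k, v in target.items() if k not in current},
        'changed': {k: current[k] for k in current.keys() & target.keys()
                    if current[k] != target[k]},
    }

delta = diffTags({'title': 'Pilot', 'encoder': 'x264'},
                 {'title': 'pilot', 'year': '2001'})
print(delta['added'])    # {'encoder': 'x264'}
print(delta['removed'])  # {'year': '2001'}
print(delta['changed'])  # {'title': 'Pilot'}
```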
    def on_mount(self):

        if self.__currentPattern is None:
            row = (' ', '<New show>', ' ')  # Convert each element to a string before adding
            self.showsTable.add_row(*map(str, row))

        for show in self.__sc.getAllShows():
            row = (int(show.id), show.name, show.year)
            self.showsTable.add_row(*map(str, row))

        for mediaTagKey, mediaTagValue in self.__sourceMediaDescriptor.getTags().items():

            textColor = None
            if mediaTagKey in self.__ignoreGlobalKeys:
                textColor = 'blue'
            if mediaTagKey in self.__removeGlobalKeys:
                textColor = 'red'

            row = (formatRichColor(mediaTagKey, textColor), formatRichColor(mediaTagValue, textColor))
            self.mediaTagsTable.add_row(*map(str, row))

        self.updateTracks()

        if self.__currentPattern is not None:

            showIdentifier = self.__currentPattern.getShowId()
            showRowIndex = self.getRowIndexFromShowId(showIdentifier)
            if showRowIndex is not None:
                self.showsTable.move_cursor(row=showRowIndex)

            self.query_one("#pattern_input", Input).value = self.__currentPattern.getPattern()

            self.updateDifferences()

        else:

            self.query_one("#pattern_input", Input).value = self.__mediaFilename
            self.highlightPattern(True)

    def highlightPattern(self, state : bool):
        if state:
            self.query_one("#pattern_input", Input).styles.background = 'red'
        else:
            self.query_one("#pattern_input", Input).styles.background = None

    def updateTracks(self):

        self.tracksTable.clear()

        trackDescriptorList = self.__sourceMediaDescriptor.getTrackDescriptors()

        typeCounter = {}

        for td in trackDescriptorList:

            trackType = td.getType()
            if trackType not in typeCounter.keys():
                typeCounter[trackType] = 0

            dispoSet = td.getDispositionSet()
            audioLayout = td.getAudioLayout()
            row = (td.getIndex(),
                   trackType.label(),
                   typeCounter[trackType],
                   td.getCodec().label(),
                   audioLayout.label() if trackType == TrackType.AUDIO
                   and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
                   td.getLanguage().label(),
                   td.getTitle(),
                   'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
                   'Yes' if TrackDisposition.FORCED in dispoSet else 'No')

            self.tracksTable.add_row(*map(str, row))

            typeCounter[trackType] += 1

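The `typeCounter` bookkeeping above assigns each track a per-type sub-index (the first audio track is audio:0, the second audio:1, and so on). A standalone sketch of the same counting scheme, with plain strings standing in for `TrackType` values:

```python
from collections import defaultdict

tracks = ['video', 'audio', 'audio', 'subtitle', 'audio']

typeCounter = defaultdict(int)  # replaces the manual "if key missing: initialise to 0" step
subIndices = []
for trackType in tracks:
    subIndices.append((trackType, typeCounter[trackType]))
    typeCounter[trackType] += 1

print(subIndices)
# [('video', 0), ('audio', 0), ('audio', 1), ('subtitle', 0), ('audio', 2)]
```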
    def compose(self):

        # Create the shows DataTable widget and define its columns
        self.showsTable = DataTable(classes="two")

        self.column_key_show_id = self.showsTable.add_column("ID", width=10)
        self.column_key_show_name = self.showsTable.add_column("Name", width=80)
        self.column_key_show_year = self.showsTable.add_column("Year", width=10)

        self.showsTable.cursor_type = 'row'

        # Media tags table
        self.mediaTagsTable = DataTable(classes="two")

        self.column_key_track_tag_key = self.mediaTagsTable.add_column("Key", width=30)
        self.column_key_track_tag_value = self.mediaTagsTable.add_column("Value", width=70)

        self.mediaTagsTable.cursor_type = 'row'

        # Tracks table
        self.tracksTable = DataTable(classes="two")

        self.column_key_track_index = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_INDEX_COLUMN_LABEL, width=5)
        self.column_key_track_type = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_TYPE_COLUMN_LABEL, width=10)
        self.column_key_track_sub_index = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL, width=8)
        self.column_key_track_codec = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_CODEC_COLUMN_LABEL, width=10)
        self.column_key_track_layout = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_LAYOUT_COLUMN_LABEL, width=10)
        self.column_key_track_language = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_LANGUAGE_COLUMN_LABEL, width=15)
        self.column_key_track_title = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_TITLE_COLUMN_LABEL, width=48)
        self.column_key_track_default = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_DEFAULT_COLUMN_LABEL, width=8)
        self.column_key_track_forced = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_FORCED_COLUMN_LABEL, width=8)

        self.tracksTable.cursor_type = 'row'

        # Differences table
        self.differencesTable = DataTable(id='differences-table')  # classes="triple"

        self.column_key_differences = self.differencesTable.add_column(MediaDetailsScreen.DIFFERENCES_TABLE_DIFFERENCES_COLUMN_LABEL, width=100)

        self.differencesTable.cursor_type = 'row'

        yield Header()

        with Grid():

            # 1
            yield Static("Show")
            yield self.showsTable
            yield Static(" ")
            yield self.differencesTable

            # 2
            yield Static(" ", classes="four")

            # 3
            yield Static(" ")
            yield Button("Substitute", id="pattern_button")
            yield Static(" ", classes="two")

            # 4
            yield Static("Pattern")
            yield Input(type="text", id='pattern_input', classes="two")
            yield Static(" ")

            # 5
            yield Static(" ", classes="four")

            # 6
            yield Static("Media Tags")
            yield self.mediaTagsTable
            yield Static(" ")

            # 7
            yield Static(" ", classes="four")

            # 8
            yield Static(" ")
            yield Button("Set Default", id="select_default_button")
            yield Button("Set Forced", id="select_forced_button")
            yield Static(" ")

            # 9
            yield Static("Streams")
            yield self.tracksTable
            yield Static(" ")

        yield Footer()

    def getPatternObjFromInput(self):
        """Return the show id and pattern from the corresponding inputs as an object."""
        patternObj = {}
        try:
            patternObj['show_id'] = self.getSelectedShowDescriptor().getId()
            patternObj['pattern'] = str(self.query_one("#pattern_input", Input).value)
        except AttributeError:
            # No show selected: getSelectedShowDescriptor() returned None
            return {}
        return patternObj

    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "pattern_button":

            pattern = self.query_one("#pattern_input", Input).value

            patternMatch = re.search(FileProperties.SE_INDICATOR_PATTERN, pattern)

            if patternMatch:
                self.query_one("#pattern_input", Input).value = pattern.replace(patternMatch.group(1), FileProperties.SE_INDICATOR_PATTERN)

        if event.button.id == "select_default_button":
            selectedTrackDescriptor = self.getSelectedTrackDescriptor()
            self.__sourceMediaDescriptor.setDefaultSubTrack(selectedTrackDescriptor.getType(), selectedTrackDescriptor.getSubIndex())
            self.updateTracks()

        if event.button.id == "select_forced_button":
            selectedTrackDescriptor = self.getSelectedTrackDescriptor()
            self.__sourceMediaDescriptor.setForcedSubTrack(selectedTrackDescriptor.getType(), selectedTrackDescriptor.getSubIndex())
            self.updateTracks()

    def getSelectedTrackDescriptor(self):
        """Return a partial track descriptor built from the selected table row, or None."""
        try:

            # Fetch the currently selected row
            row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)

            if row_key is not None:
                selected_track_data = self.tracksTable.get_row(row_key)

                kwargs = {}
                kwargs[TrackDescriptor.CONTEXT_KEY] = self.context
                kwargs[TrackDescriptor.INDEX_KEY] = int(selected_track_data[0])
                kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.fromLabel(selected_track_data[1])
                kwargs[TrackDescriptor.SUB_INDEX_KEY] = int(selected_track_data[2])
                kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.fromLabel(selected_track_data[3])
                kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.fromLabel(selected_track_data[4])

                return TrackDescriptor(**kwargs)
            else:
                return None

        except CellDoesNotExist:
            return None

    def getSelectedShowDescriptor(self) -> ShowDescriptor:

        try:

            row_key, col_key = self.showsTable.coordinate_to_cell_key(self.showsTable.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.showsTable.get_row(row_key)

                try:
                    kwargs = {}

                    kwargs[ShowDescriptor.ID_KEY] = int(selected_row_data[0])
                    kwargs[ShowDescriptor.NAME_KEY] = str(selected_row_data[1])
                    kwargs[ShowDescriptor.YEAR_KEY] = int(selected_row_data[2])

                    return ShowDescriptor(**kwargs)

                except ValueError:
                    return None

        except CellDoesNotExist:
            return None

    def handle_new_pattern(self, showDescriptor: ShowDescriptor):
        """Persist the current pattern for the given show and seed its media tags and tracks."""

        if type(showDescriptor) is not ShowDescriptor:
            raise TypeError("MediaDetailsScreen.handle_new_pattern(): Argument 'showDescriptor' has to be of type ShowDescriptor")

        self.removeShow()

        showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
        if showRowIndex is None:
            show = (showDescriptor.getId(), showDescriptor.getName(), showDescriptor.getYear())
            self.showsTable.add_row(*map(str, show))

        showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
        if showRowIndex is not None:
            self.showsTable.move_cursor(row=showRowIndex)

        patternObj = self.getPatternObjFromInput()

        if patternObj:
            patternId = self.__pc.addPattern(patternObj)
            if patternId:
                self.highlightPattern(False)

                for tagKey, tagValue in self.__sourceMediaDescriptor.getTags().items():

                    # Filter tags that make no sense to preserve
                    if tagKey not in self.__ignoreGlobalKeys and tagKey not in self.__removeGlobalKeys:
                        self.__tac.updateMediaTag(patternId, tagKey, tagValue)

                for trackDescriptor in self.__sourceMediaDescriptor.getTrackDescriptors():
                    self.__tc.addTrack(trackDescriptor, patternId = patternId)

    def action_new_pattern(self):
        """Add a new pattern.

        If the corresponding show does not exist in the database, it is added beforehand."""

        selectedShowDescriptor = self.getSelectedShowDescriptor()

        #HINT: The screen callback is only invoked after this method has exited. As a workaround the
        #      callback is executed directly from here with a mock-up screen result containing the
        #      keys necessary to perform correctly.
        if selectedShowDescriptor is None:
            self.app.push_screen(ShowDetailsScreen(), self.handle_new_pattern)
        else:
            self.handle_new_pattern(selectedShowDescriptor)

    def action_update_pattern(self):
        """Update the pattern.

        When updating the database the actions must reverse the difference (equivalent to diff db->file)."""

        if self.__currentPattern is not None:
            patternObj = self.getPatternObjFromInput()
            if (patternObj
                    and self.__currentPattern.getPattern() != patternObj['pattern']):
                return self.__pc.updatePattern(self.__currentPattern.getId(), patternObj)

            self.loadProperties()

            # __mediaChangeSetObj is file vs database
            if MediaDescriptorChangeSet.TAGS_KEY in self.__mediaChangeSetObj.keys():

                if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                    for addedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY].keys():
                        self.__tac.deleteMediaTagByKey(self.__currentPattern.getId(), addedTagKey)

                if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                    for removedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY].keys():
                        currentTags = self.__sourceMediaDescriptor.getTags()
                        self.__tac.updateMediaTag(self.__currentPattern.getId(), removedTagKey, currentTags[removedTagKey])

                if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                    for changedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY].keys():
                        currentTags = self.__sourceMediaDescriptor.getTags()
                        self.__tac.updateMediaTag(self.__currentPattern.getId(), changedTagKey, currentTags[changedTagKey])

            if MediaDescriptorChangeSet.TRACKS_KEY in self.__mediaChangeSetObj.keys():

                if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                    for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY].items():
                        self.__tc.addTrack(trackDescriptor, patternId = self.__currentPattern.getId())

                if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                    trackDescriptor: TrackDescriptor
                    for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_REMOVED_KEY].items():
                        self.__tc.deleteTrack(trackDescriptor.getId())

                if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():

                    for trackIndex, trackDiff in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY].items():

                        targetTracks = [t for t in self.__targetMediaDescriptor.getTrackDescriptors() if t.getIndex() == trackIndex]
                        targetTrackId = targetTracks[0].getId() if targetTracks else None
                        targetTrackIndex = targetTracks[0].getIndex() if targetTracks else None

                        #HINT: Tracks read from the file carry no database track_id
                        changedCurrentTracks = [t for t in self.__sourceMediaDescriptor.getTrackDescriptors() if t.getIndex() == trackIndex]

                        if TrackDescriptor.TAGS_KEY in trackDiff.keys():
                            tagsDiff = trackDiff[TrackDescriptor.TAGS_KEY]

                            if DIFF_ADDED_KEY in tagsDiff.keys():
                                for tagKey, tagValue in tagsDiff[DIFF_ADDED_KEY].items():
                                    self.__tac.updateTrackTag(targetTrackId, tagKey, tagValue)

                            if DIFF_REMOVED_KEY in tagsDiff.keys():
                                for tagKey, tagValue in tagsDiff[DIFF_REMOVED_KEY].items():
                                    self.__tac.deleteTrackTagByKey(targetTrackId, tagKey)

                            if DIFF_CHANGED_KEY in tagsDiff.keys():
                                for tagKey, tagValue in tagsDiff[DIFF_CHANGED_KEY].items():
                                    self.__tac.updateTrackTag(targetTrackId, tagKey, tagValue)

                        if TrackDescriptor.DISPOSITION_SET_KEY in trackDiff.keys():
                            changedTrackDispositionDiff = trackDiff[TrackDescriptor.DISPOSITION_SET_KEY]

                            if DIFF_ADDED_KEY in changedTrackDispositionDiff.keys():
                                for changedDisposition in changedTrackDispositionDiff[DIFF_ADDED_KEY]:
                                    if targetTrackIndex is not None:
                                        self.__tc.setDispositionState(self.__currentPattern.getId(), targetTrackIndex, changedDisposition, True)

                            if DIFF_REMOVED_KEY in changedTrackDispositionDiff.keys():
                                for changedDisposition in changedTrackDispositionDiff[DIFF_REMOVED_KEY]:
                                    if targetTrackIndex is not None:
                                        self.__tc.setDispositionState(self.__currentPattern.getId(), targetTrackIndex, changedDisposition, False)

            self.updateDifferences()

    def action_edit_pattern(self):

        patternObj = self.getPatternObjFromInput()

        if patternObj.get('pattern'):

            selectedPatternId = self.__pc.findPattern(patternObj)

            if selectedPatternId is None:
                raise click.ClickException("MediaDetailsScreen.action_edit_pattern(): Pattern to edit has no id")

            self.app.push_screen(PatternDetailsScreen(patternId = selectedPatternId, showId = self.getSelectedShowDescriptor().getId()), self.handle_edit_pattern)


    def handle_edit_pattern(self, screenResult):
        self.query_one("#pattern_input", Input).value = screenResult['pattern']
        self.updateDifferences()

@@ -1,47 +0,0 @@
import os, sys, importlib, inspect, glob, re

from ffx.configuration_controller import ConfigurationController
from ffx.database import databaseContext

from sqlalchemy import Engine
from sqlalchemy.orm import sessionmaker


class Conversion():

    def __init__(self):

        self._context = {}
        self._context['config'] = ConfigurationController()

        self._context['database'] = databaseContext(databasePath=self._context['config'].getDatabaseFilePath())

        # Protected (single underscore) so subclasses can use them; name mangling
        # would hide double-underscore attributes from Conversion_* subclasses.
        self._databaseSession: sessionmaker = self._context['database']['session']
        self._databaseEngine: Engine = self._context['database']['engine']


    @staticmethod
    def list():

        basePath = os.path.dirname(__file__)

        filenamePattern = re.compile("conversion_([0-9]+)_([0-9]+)\\.py")

        filenameList = [os.path.basename(fp) for fp in glob.glob(f"{ basePath }/*.py") if fp != __file__]

        versionTupleList = [(fm.group(1), fm.group(2)) for fn in filenameList if (fm := filenamePattern.search(fn))]

        return versionTupleList


    @staticmethod
    def getClassReference(versionFrom, versionTo):
        importlib.import_module(f"ffx.model.conversions.conversion_{ versionFrom }_{ versionTo }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.model.conversions.conversion_{ versionFrom }_{ versionTo }"]):
            #HINT: Excluding DispositionCombination as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'Conversion' and name.startswith('Conversion'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [Conversion.getClassReference(verFrom, verTo) for verFrom, verTo in Conversion.list()]
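`Conversion.list()` above discovers migration steps purely from filenames. The regex-plus-walrus filtering can be exercised in isolation (the filename list below is hypothetical; the real one comes from `glob.glob()`):

```python
import re

filenamePattern = re.compile(r"conversion_([0-9]+)_([0-9]+)\.py")

# Hypothetical directory listing; Conversion.list() builds this from glob.glob()
filenameList = ["conversion_2_3.py", "conversion_3_4.py", "conversion.py", "helpers.py"]

# Walrus binding keeps the match object available inside the comprehension
versionTupleList = [(fm.group(1), fm.group(2))
                    for fn in filenameList
                    if (fm := filenamePattern.search(fn))]

print(versionTupleList)  # [('2', '3'), ('3', '4')]
```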
@@ -1,17 +0,0 @@
from sqlalchemy import text

from .conversion import Conversion


class Conversion_2_3(Conversion):

    def __init__(self):
        super().__init__()

    def applyConversion(self):

        # Protected attributes inherited from the Conversion base class
        s = self._databaseSession()
        e = self._databaseEngine

        with e.connect() as c:
            # SQLAlchemy 2.0 requires textual SQL to be wrapped in text()
            c.execute(text("ALTER TABLE user ADD COLUMN email VARCHAR(255)"))
@@ -1,7 +0,0 @@
from .conversion import Conversion


class Conversion_3_4(Conversion):
    pass
@@ -1,28 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class MediaTag(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is a string naming the target entity's class, or the Mapped class itself
    backref creates a bi-directional corresponding relationship (back_populates is preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'media_tags'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)

    # v1.x
    pattern_id = Column(Integer, ForeignKey('patterns.id', ondelete="CASCADE"))
    pattern = relationship('Pattern', back_populates='media_tags')
@@ -1,76 +0,0 @@
import click

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship

from .show import Base, Show

from ffx.media_descriptor import MediaDescriptor
from ffx.show_descriptor import ShowDescriptor


class Pattern(Base):

    __tablename__ = 'patterns'

    # v1.x
    id = Column(Integer, primary_key=True)
    pattern = Column(String)

    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # pattern: Mapped[str] = mapped_column(String, nullable=False)

    # v1.x
    show_id = Column(Integer, ForeignKey('shows.id', ondelete="CASCADE"))
    show = relationship(Show, back_populates='patterns', lazy='joined')

    # v2.0
    # show_id: Mapped[int] = mapped_column(ForeignKey("shows.id", ondelete="CASCADE"))
    # show: Mapped["Show"] = relationship(back_populates="patterns")

    tracks = relationship('Track', back_populates='pattern', cascade="all, delete", lazy='joined')


    media_tags = relationship('MediaTag', back_populates='pattern', cascade="all, delete", lazy='joined')


    def getId(self):
        return int(self.id)

    def getShowId(self):
        return int(self.show_id)

    def getShowDescriptor(self, context) -> ShowDescriptor:
        # click.echo(f"self.show {self.show} id={self.show_id}")
        return self.show.getDescriptor(context)

    def getId(self):
        return int(self.id)

    def getPattern(self):
        return str(self.pattern)

    def getTags(self):
        return {str(t.key):str(t.value) for t in self.media_tags}


    def getMediaDescriptor(self, context):

        kwargs = {}

        kwargs[MediaDescriptor.CONTEXT_KEY] = context

        kwargs[MediaDescriptor.TAGS_KEY] = self.getTags()
        kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY] = []

        # Set ordered subindices
        subIndexCounter = {}
        for track in self.tracks:
            trackType = track.getType()
            if not trackType in subIndexCounter.keys():
                subIndexCounter[trackType] = 0
            kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY].append(track.getDescriptor(context, subIndex = subIndexCounter[trackType]))
            subIndexCounter[trackType] += 1

        return MediaDescriptor(**kwargs)
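The per-type sub-indexing loop in `Pattern.getMediaDescriptor()` above can be sketched in isolation. This is a minimal stand-in: `assign_sub_indices` and the plain string type labels are hypothetical, not part of ffx.

```python
from collections import defaultdict

def assign_sub_indices(track_types):
    """Mirror the subIndexCounter loop: tracks of each type receive
    consecutive sub-indices in encounter order."""
    counter = defaultdict(int)
    result = []
    for t in track_types:
        result.append((t, counter[t]))
        counter[t] += 1
    return result

pairs = assign_sub_indices(['video', 'audio', 'audio', 'subtitle', 'audio'])
assert pairs == [('video', 0), ('audio', 0), ('audio', 1),
                 ('subtitle', 0), ('audio', 2)]
```

The sub-index is what lets ffx address "the second audio track" independently of the track's absolute stream index.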
@@ -1,16 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class Property(Base):

    __tablename__ = 'properties'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)
@@ -1,71 +0,0 @@
import click

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship

from .show import Base, Show


class ShiftedSeason(Base):

    __tablename__ = 'shifted_seasons'

    # v1.x
    id = Column(Integer, primary_key=True)


    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # pattern: Mapped[str] = mapped_column(String, nullable=False)

    # v1.x
    show_id = Column(Integer, ForeignKey('shows.id', ondelete="CASCADE"))
    show = relationship(Show, back_populates='shifted_seasons', lazy='joined')

    # v2.0
    # show_id: Mapped[int] = mapped_column(ForeignKey("shows.id", ondelete="CASCADE"))
    # show: Mapped["Show"] = relationship(back_populates="patterns")


    original_season = Column(Integer)

    first_episode = Column(Integer, default = -1)
    last_episode = Column(Integer, default = -1)

    season_offset = Column(Integer, default = 0)
    episode_offset = Column(Integer, default = 0)


    def getId(self):
        return self.id


    def getOriginalSeason(self):
        return self.original_season

    def getFirstEpisode(self):
        return self.first_episode

    def getLastEpisode(self):
        return self.last_episode


    def getSeasonOffset(self):
        return self.season_offset

    def getEpisodeOffset(self):
        return self.episode_offset


    def getObj(self):

        shiftedSeasonObj = {}

        shiftedSeasonObj['original_season'] = self.getOriginalSeason()
        shiftedSeasonObj['first_episode'] = self.getFirstEpisode()
        shiftedSeasonObj['last_episode'] = self.getLastEpisode()
        shiftedSeasonObj['season_offset'] = self.getSeasonOffset()
        shiftedSeasonObj['episode_offset'] = self.getEpisodeOffset()

        return shiftedSeasonObj
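The `ShiftedSeason` columns describe an episode-range remapping rule. How ffx applies such a rule lives elsewhere in the codebase; the sketch below is an assumption about that application, using the same field names as `getObj()` (`apply_rule` itself is hypothetical).

```python
def apply_rule(season, episode, rule):
    """Apply a ShiftedSeason-style remapping rule: if (season, episode) falls
    inside the rule's episode range, add the configured offsets.
    Field names mirror ShiftedSeason.getObj(); the logic is an assumption."""
    if (season == rule['original_season']
            and rule['first_episode'] <= episode <= rule['last_episode']):
        return season + rule['season_offset'], episode + rule['episode_offset']
    return season, episode

# e.g. episodes 13..24 of airing season 1 are really season 2, episodes 1..12
rule = {'original_season': 1, 'first_episode': 13, 'last_episode': 24,
        'season_offset': 1, 'episode_offset': -12}
assert apply_rule(1, 14, rule) == (2, 2)
assert apply_rule(1, 5, rule) == (1, 5)
```

The `-1` defaults on `first_episode`/`last_episode` presumably mark an unbounded range; that case is omitted here.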
@@ -1,62 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from ffx.show_descriptor import ShowDescriptor

Base = declarative_base()


class Show(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is string of class or Mapped class of the target entity
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'shows'

    # v1.x
    id = Column(Integer, primary_key=True)

    name = Column(String)
    year = Column(Integer)

    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # name: Mapped[str] = mapped_column(String, nullable=False)
    # year: Mapped[int] = mapped_column(Integer, nullable=False)

    # v1.x
    #patterns = relationship('Pattern', back_populates='show', cascade="all, delete", passive_deletes=True)
    patterns = relationship('Pattern', back_populates='show', cascade="all, delete")
    # patterns = relationship('Pattern', back_populates='show', cascade="all")

    # v2.0
    # patterns: Mapped[List["Pattern"]] = relationship(back_populates="show", cascade="all, delete")

    shifted_seasons = relationship('ShiftedSeason', back_populates='show', cascade="all, delete")


    index_season_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDEX_SEASON_DIGITS)
    index_episode_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDEX_EPISODE_DIGITS)
    indicator_season_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDICATOR_SEASON_DIGITS)
    indicator_episode_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDICATOR_EPISODE_DIGITS)


    def getDescriptor(self, context):

        kwargs = {}
        kwargs[ShowDescriptor.CONTEXT_KEY] = context
        kwargs[ShowDescriptor.ID_KEY] = int(self.id)
        kwargs[ShowDescriptor.NAME_KEY] = str(self.name)
        kwargs[ShowDescriptor.YEAR_KEY] = int(self.year)
        kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY] = int(self.index_season_digits)
        kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY] = int(self.index_episode_digits)
        kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY] = int(self.indicator_season_digits)
        kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY] = int(self.indicator_episode_digits)

        return ShowDescriptor(**kwargs)
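The four `*_digits` columns on `Show` presumably control zero-padding when ffx renders season/episode indicators; that interpretation, and the `format_indicator` helper below, are assumptions rather than ffx API.

```python
def format_indicator(season, episode, season_digits=2, episode_digits=2):
    """Build an SxxEyy-style indicator; the per-show *_digits settings are
    assumed to control the zero padding of each number."""
    return f"S{season:0{season_digits}d}E{episode:0{episode_digits}d}"

assert format_indicator(3, 7) == "S03E07"
assert format_indicator(3, 7, season_digits=1, episode_digits=3) == "S3E007"
```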
@@ -1,216 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base

from ffx.track_type import TrackType

from ffx.iso_language import IsoLanguage

from ffx.track_disposition import TrackDisposition
from ffx.track_descriptor import TrackDescriptor

from ffx.audio_layout import AudioLayout
from ffx.track_codec import TrackCodec


class Track(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is string of class or Mapped class of the target entity
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'tracks'

    # v1.x
    id = Column(Integer, primary_key=True, autoincrement = True)

    # P=pattern_id+sub_index+track_type
    track_type = Column(Integer) # TrackType

    index = Column(Integer)
    source_index = Column(Integer)

    # v1.x
    pattern_id = Column(Integer, ForeignKey('patterns.id', ondelete="CASCADE"))
    pattern = relationship('Pattern', back_populates='tracks')

    track_tags = relationship('TrackTag', back_populates='track', cascade="all, delete", lazy="joined")

    disposition_flags = Column(Integer)

    codec_name = Column(String)
    audio_layout = Column(Integer)


    def __init__(self, **kwargs):

        trackType = kwargs.pop('track_type', None)
        if trackType is not None:
            self.track_type = int(trackType)

        dispositionSet = kwargs.pop(TrackDescriptor.DISPOSITION_SET_KEY, set())
        self.disposition_flags = int(TrackDisposition.toFlags(dispositionSet))

        super().__init__(**kwargs)


    @classmethod
    def fromFfprobeStreamObj(cls, streamObj, patternId):
        """{
            'index': 4,
            'codec_name': 'hdmv_pgs_subtitle',
            'codec_long_name': 'HDMV Presentation Graphic Stream subtitles',
            'codec_type': 'subtitle',
            'codec_tag_string': '[0][0][0][0]',
            'codec_tag': '0x0000',
            'r_frame_rate': '0/0',
            'avg_frame_rate': '0/0',
            'time_base': '1/1000',
            'start_pts': 0,
            'start_time': '0.000000',
            'duration_ts': 1421035,
            'duration': '1421.035000',
            'disposition': {
                'default': 1,
                'dub': 0,
                'original': 0,
                'comment': 0,
                'lyrics': 0,
                'karaoke': 0,
                'forced': 0,
                'hearing_impaired': 0,
                'visual_impaired': 0,
                'clean_effects': 0,
                'attached_pic': 0,
                'timed_thumbnails': 0,
                'non_diegetic': 0,
                'captions': 0,
                'descriptions': 0,
                'metadata': 0,
                'dependent': 0,
                'still_image': 0
            },
            'tags': {
                'language': 'ger',
                'title': 'German Full'
            }
        }


        # v1.x
        id = Column(Integer, primary_key=True, autoincrement = True)

        # P=pattern_id+sub_index+track_type
        track_type = Column(Integer) # TrackType
        sub_index = Column(Integer)

        # v1.x
        pattern_id = Column(Integer, ForeignKey('patterns.id', ondelete='CASCADE'))
        pattern = relationship('Pattern', back_populates='tracks')


        language = Column(String) # IsoLanguage threeLetter
        title = Column(String)


        track_tags = relationship('TrackTag', back_populates='track', cascade='all, delete')


        disposition_flags = Column(Integer)


        """


        trackType = streamObj[TrackDescriptor.FFPROBE_CODEC_TYPE_KEY]

        if trackType in [t.label() for t in TrackType]:

            return cls(pattern_id = patternId,
                       track_type = trackType,
                       codec_name = streamObj[TrackDescriptor.FFPROBE_CODEC_NAME_KEY],
                       disposition_flags = sum([2**t.index() for (k,v) in streamObj[TrackDescriptor.FFPROBE_DISPOSITION_KEY].items()
                                                if v and (t := TrackDisposition.find(k)) is not None]),
                       audio_layout = AudioLayout.identify(streamObj))

        else:
            return None


    def getId(self):
        return int(self.id)

    def getPatternId(self):
        return int(self.pattern_id)

    def getType(self):
        return TrackType.fromIndex(self.track_type)

    def getCodec(self) -> TrackCodec:
        return TrackCodec.identify(self.codec_name)

    def getIndex(self):
        return int(self.index) if self.index is not None else -1

    def getSourceIndex(self):
        return int(self.source_index) if self.source_index is not None else -1

    def getLanguage(self):
        tags = {t.key:t.value for t in self.track_tags}
        return IsoLanguage.findThreeLetter(tags['language']) if 'language' in tags.keys() else IsoLanguage.UNDEFINED

    def getTitle(self):
        tags = {t.key:t.value for t in self.track_tags}
        return tags['title'] if 'title' in tags.keys() else ''

    def getDispositionSet(self):
        return TrackDisposition.toSet(self.disposition_flags)

    def getAudioLayout(self):
        return AudioLayout.fromIndex(self.audio_layout)

    def getTags(self):
        return {str(t.key):str(t.value) for t in self.track_tags}


    def setDisposition(self, disposition : TrackDisposition):
        self.disposition_flags = self.disposition_flags | int(2**disposition.index())

    def resetDisposition(self, disposition : TrackDisposition):
        self.disposition_flags = self.disposition_flags & sum([2**d.index() for d in TrackDisposition if d != disposition])

    def getDisposition(self, disposition : TrackDisposition):
        return bool(self.disposition_flags & 2**disposition.index())


    def getDescriptor(self, context = None, subIndex : int = -1) -> TrackDescriptor:

        kwargs = {}

        if not context is None:
            kwargs[TrackDescriptor.CONTEXT_KEY] = context

        kwargs[TrackDescriptor.ID_KEY] = self.getId()
        kwargs[TrackDescriptor.PATTERN_ID_KEY] = self.getPatternId()

        kwargs[TrackDescriptor.INDEX_KEY] = self.getIndex()
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = self.getSourceIndex()

        if subIndex > -1:
            kwargs[TrackDescriptor.SUB_INDEX_KEY] = subIndex

        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = self.getType()
        kwargs[TrackDescriptor.CODEC_KEY] = self.getCodec()

        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = self.getDispositionSet()
        kwargs[TrackDescriptor.TAGS_KEY] = self.getTags()

        kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = self.getAudioLayout()

        return TrackDescriptor(**kwargs)
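`Track` stores its dispositions as a single integer bitmask (`2**index` per flag, as seen in `setDisposition`/`getDisposition` above and in `fromFfprobeStreamObj`'s `sum(...)` expression). A minimal self-contained sketch of that packing, using a hypothetical `Disposition` enum in place of ffx's `TrackDisposition`:

```python
from enum import Enum

class Disposition(Enum):
    # illustrative subset; ffx's TrackDisposition defines the real indices
    DEFAULT = 0
    FORCED = 1
    HEARING_IMPAIRED = 2

def to_flags(dispositions):
    """Pack a set of dispositions into an int bitmask (2**index per member),
    analogous to TrackDisposition.toFlags()."""
    return sum(2 ** d.value for d in dispositions)

def to_set(flags):
    """Unpack a bitmask back into a set, analogous to TrackDisposition.toSet()."""
    return {d for d in Disposition if flags & (2 ** d.value)}

flags = to_flags({Disposition.DEFAULT, Disposition.FORCED})
assert flags == 0b011
assert to_set(flags) == {Disposition.DEFAULT, Disposition.FORCED}
```

Storing the flags as one integer column keeps the schema flat while still round-tripping to a set at the Python level.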
@@ -1,28 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class TrackTag(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is string of class or Mapped class of the target entity
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'track_tags'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)

    # v1.x
    track_id = Column(Integer, ForeignKey('tracks.id', ondelete="CASCADE"))
    track = relationship('Track', back_populates='track_tags')
@@ -1,159 +0,0 @@
import click, re

from ffx.model.pattern import Pattern


class PatternController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session'] # convenience


    def addPattern(self, patternObj):
        """Adds pattern to database from obj

        Returns database id or 0 if pattern already exists"""

        try:

            s = self.Session()
            q = s.query(Pattern).filter(Pattern.show_id == int(patternObj['show_id']),
                                        Pattern.pattern == str(patternObj['pattern']))

            if not q.count():
                pattern = Pattern(show_id = int(patternObj['show_id']),
                                  pattern = str(patternObj['pattern']))
                s.add(pattern)
                s.commit()
                return pattern.getId()
            else:
                return 0

        except Exception as ex:
            raise click.ClickException(f"PatternController.addPattern(): {repr(ex)}")
        finally:
            s.close()


    def updatePattern(self, patternId, patternObj):

        try:
            s = self.Session()
            q = s.query(Pattern).filter(Pattern.id == int(patternId))

            if q.count():

                pattern = q.first()

                pattern.show_id = int(patternObj['show_id'])
                pattern.pattern = str(patternObj['pattern'])

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"PatternController.updatePattern(): {repr(ex)}")
        finally:
            s.close()


    def findPattern(self, patternObj):

        try:
            s = self.Session()
            q = s.query(Pattern).filter(Pattern.show_id == int(patternObj['show_id']), Pattern.pattern == str(patternObj['pattern']))

            if q.count():
                pattern = q.first()
                return int(pattern.id)
            else:
                return None

        except Exception as ex:
            raise click.ClickException(f"PatternController.findPattern(): {repr(ex)}")
        finally:
            s.close()


    def getPattern(self, patternId : int):

        if type(patternId) is not int:
            raise ValueError(f"PatternController.getPattern(): Argument patternId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(Pattern).filter(Pattern.id == int(patternId))

            return q.first() if q.count() else None

        except Exception as ex:
            raise click.ClickException(f"PatternController.getPattern(): {repr(ex)}")
        finally:
            s.close()


    def deletePattern(self, patternId):
        try:
            s = self.Session()
            q = s.query(Pattern).filter(Pattern.id == int(patternId))

            if q.count():

                #DAFUQ: https://stackoverflow.com/a/19245058
                # q.delete()
                pattern = q.first()
                s.delete(pattern)

                s.commit()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"PatternController.deletePattern(): {repr(ex)}")
        finally:
            s.close()


    def matchFilename(self, filename : str) -> dict:
        """Returns dict {'match': <a regex match obj>, 'pattern': <ffx pattern obj>} or an empty dict if no pattern was found"""

        try:
            s = self.Session()
            q = s.query(Pattern)

            matchResult = {}

            for pattern in q.all():
                patternMatch = re.search(str(pattern.pattern), str(filename))
                if patternMatch is not None:
                    matchResult['match'] = patternMatch
                    matchResult['pattern'] = pattern

            return matchResult

        except Exception as ex:
            raise click.ClickException(f"PatternController.matchFilename(): {repr(ex)}")
        finally:
            s.close()

    # def getMediaDescriptor(self, context, patternId):
    #
    #     try:
    #         s = self.Session()
    #         q = s.query(Pattern).filter(Pattern.id == int(patternId))
    #
    #         if q.count():
    #             return q.first().getMediaDescriptor(context)
    #         else:
    #             return None
    #
    #     except Exception as ex:
    #         raise click.ClickException(f"PatternController.getMediaDescriptor(): {repr(ex)}")
    #     finally:
    #         s.close()
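Note a subtlety in `PatternController.matchFilename()` above: the loop never breaks, so when several patterns match the same filename, the last match overwrites earlier ones. A minimal sketch of that behavior, with hypothetical `(id, regex)` rows standing in for `Pattern` records:

```python
import re

# Hypothetical (id, regex) rows standing in for Pattern database records.
patterns = [
    (1, r"Show\.Name\.S(\d{2})E(\d{2})"),
    (2, r"S(\d{2})E(\d{2})"),
]

def match_filename(filename):
    """Mirror matchFilename(): iterate ALL patterns without breaking,
    so the last matching pattern wins; return {} when nothing matches."""
    matchResult = {}
    for pid, pat in patterns:
        m = re.search(pat, filename)
        if m is not None:
            matchResult = {'match': m, 'pattern_id': pid}
    return matchResult

result = match_filename("Show.Name.S03E07.mkv")
assert result['pattern_id'] == 2          # both patterns match; the last wins
assert result['match'].groups() == ('03', '07')
assert match_filename("unrelated.txt") == {}
```

Whether last-match-wins is intended or the loop should `break` on the first hit is worth checking against the callers.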
@@ -1,111 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .show_controller import ShowController
from .pattern_controller import PatternController

from ffx.model.pattern import Pattern


# Screen[dict[int, str, int]]
class PatternDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 330;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, patternId = None, showId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session'] # convenience

        self.__pc = PatternController(context = self.context)
        self.__sc = ShowController(context = self.context)

        self.__patternId = patternId
        self.__pattern: Pattern = self.__pc.getPattern(patternId) if patternId is not None else {}
        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else {}


    def on_mount(self):
        if self.__showDescriptor:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")
        if not self.__pattern is None:
            self.query_one("#patternlabel", Static).update(str(self.__pattern.pattern))


    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure to delete the following filename pattern?", id="toplabel", classes="two")

            yield Static("", classes="two")

            yield Static("Pattern")
            yield Static("", id="patternlabel")

            yield Static("", classes="two")

            yield Static("from show")
            yield Static("", id="showlabel")

            yield Static("", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()


    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__patternId is None:
                raise click.ClickException('PatternDeleteScreen.on_button_pressed(): pattern id is undefined')

            if self.__pc.deletePattern(self.__patternId):
                self.dismiss(self.__pattern)

            else:
                #TODO: notification message
                self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,577 +0,0 @@
|
|||||||
import click, re
|
|
||||||
from typing import List
|
|
||||||
|
|
||||||
from textual.screen import Screen
|
|
||||||
from textual.widgets import Header, Footer, Static, Button, Input, DataTable
|
|
||||||
from textual.containers import Grid
|
|
||||||
|
|
||||||
from ffx.model.pattern import Pattern
|
|
||||||
from ffx.model.track import Track
|
|
||||||
|
|
||||||
from .pattern_controller import PatternController
|
|
||||||
from .show_controller import ShowController
|
|
||||||
from .track_controller import TrackController
|
|
||||||
from .tag_controller import TagController
|
|
||||||
|
|
||||||
from .track_details_screen import TrackDetailsScreen
|
|
||||||
from .track_delete_screen import TrackDeleteScreen
|
|
||||||
|
|
||||||
from .tag_details_screen import TagDetailsScreen
|
|
||||||
from .tag_delete_screen import TagDeleteScreen
|
|
||||||
|
|
||||||
from ffx.track_type import TrackType
|
|
||||||
|
|
||||||
from ffx.track_disposition import TrackDisposition
|
|
||||||
from ffx.track_descriptor import TrackDescriptor
|
|
||||||
|
|
||||||
from textual.widgets._data_table import CellDoesNotExist
|
|
||||||
|
|
||||||
from ffx.file_properties import FileProperties
|
|
||||||
from ffx.iso_language import IsoLanguage
|
|
||||||
from ffx.audio_layout import AudioLayout
|
|
||||||
|
|
||||||
from ffx.helper import formatRichColor, removeRichColor
|
|
||||||
|
|
||||||
|
|
||||||
# Screen[dict[int, str, int]]
|
|
||||||
class PatternDetailsScreen(Screen):
|
|
||||||
|
|
||||||
CSS = """
|
|
||||||
|
|
||||||
Grid {
|
|
||||||
grid-size: 7 13;
|
|
||||||
grid-rows: 2 2 2 2 2 8 2 2 8 2 2 2 2;
|
|
||||||
grid-columns: 25 25 25 25 25 25 25;
|
|
||||||
height: 100%;
|
|
||||||
width: 100%;
|
|
||||||
padding: 1;
|
|
||||||
}
|
|
||||||
|
|
||||||
Input {
|
|
||||||
border: none;
|
|
||||||
}
|
|
||||||
Button {
|
|
||||||
border: none;
|
|
||||||
}
|
|
||||||
|
|
||||||
DataTable {
|
|
||||||
min-height: 6;
|
|
||||||
}
|
|
||||||
|
|
||||||
DataTable .datatable--cursor {
|
|
||||||
background: darkorange;
|
|
||||||
color: black;
|
|
||||||
}
|
|
||||||
|
|
||||||
DataTable .datatable--header {
|
|
||||||
background: steelblue;
|
|
||||||
color: white;
|
|
||||||
}
|
|
||||||
|
|
||||||
#toplabel {
|
|
||||||
height: 1;
|
|
||||||
}
|
|
||||||
|
|
||||||
.three {
|
|
||||||
column-span: 3;
|
|
||||||
}
|
|
||||||
|
|
||||||
.four {
|
|
||||||
column-span: 4;
|
|
||||||
}
|
|
||||||
.five {
|
|
||||||
column-span: 5;
|
|
||||||
}
|
|
||||||
.six {
|
|
||||||
column-span: 6;
|
|
||||||
}
|
|
||||||
.seven {
|
|
||||||
column-span: 7;
|
|
||||||
}
|
|
||||||
|
|
||||||
.box {
|
|
||||||
height: 100%;
|
|
||||||
border: solid green;
|
|
||||||
}
|
|
||||||
|
|
||||||
.yellow {
|
|
||||||
tint: yellow 40%;
|
|
||||||
}
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, patternId = None, showId = None):
|
|
||||||
super().__init__()
|
|
||||||
|
|
||||||
self.context = self.app.getContext()
|
|
||||||
self.Session = self.context['database']['session'] # convenience
|
|
||||||
|
|
||||||
self.__configurationData = self.context['config'].getData()
|
|
||||||
|
|
||||||
metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}
|
|
||||||
|
|
||||||
self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
|
|
||||||
self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
|
|
||||||
self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
|
|
||||||
self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
|
|
||||||
if 'streams' in metadataConfiguration.keys()
|
|
||||||
and 'remove' in metadataConfiguration['streams'].keys() else [])
|
|
||||||
self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
|
|
||||||
if 'streams' in metadataConfiguration.keys()
|
|
||||||
and 'ignore' in metadataConfiguration['streams'].keys() else [])
|
|
||||||
|
|
||||||
self.__pc = PatternController(context = self.context)
|
|
||||||
self.__sc = ShowController(context = self.context)
|
|
||||||
self.__tc = TrackController(context = self.context)
|
|
||||||
self.__tac = TagController(context = self.context)
|
|
||||||
|
|
||||||
self.__pattern : Pattern = self.__pc.getPattern(patternId) if patternId is not None else None
|
|
||||||
self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None
|
|
||||||
|
|
||||||
|
|
||||||
#TODO: per controller
|
|
||||||
def loadTracks(self, show_id):
|
|
||||||
|
|
||||||
try:
|
|
||||||
|
|
||||||
tracks = {}
|
|
||||||
tracks['audio'] = {}
|
|
||||||
tracks['subtitle'] = {}
|
|
||||||
|
|
||||||
s = self.Session()
|
|
||||||
q = s.query(Pattern).filter(Pattern.show_id == int(show_id))
|
|
||||||
|
|
||||||
return [{'id': int(p.id), 'pattern': p.pattern} for p in q.all()]
|
|
||||||
|
|
||||||
except Exception as ex:
|
|
||||||
raise click.ClickException(f"loadTracks(): {repr(ex)}")
|
|
||||||
finally:
|
|
||||||
s.close()
|
|
||||||
|
|
||||||
|
|
||||||
    def updateTracks(self):

        self.tracksTable.clear()

        if self.__pattern is not None:

            tracks = self.__tc.findTracks(self.__pattern.getId())

            typeCounter = {}

            tr: Track
            for tr in tracks:

                td: TrackDescriptor = tr.getDescriptor(self.context)

                trackType = td.getType()
                if trackType not in typeCounter:
                    typeCounter[trackType] = 0

                dispoSet = td.getDispositionSet()

                trackLanguage = td.getLanguage()
                audioLayout = td.getAudioLayout()

                row = (td.getIndex(),
                       trackType.label(),
                       typeCounter[trackType],
                       td.getCodec().label(),
                       audioLayout.label() if trackType == TrackType.AUDIO
                       and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
                       trackLanguage.label() if trackLanguage != IsoLanguage.UNDEFINED else ' ',
                       td.getTitle(),
                       'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
                       'Yes' if TrackDisposition.FORCED in dispoSet else 'No',
                       td.getSourceIndex())

                self.tracksTable.add_row(*map(str, row))

                typeCounter[trackType] += 1

    def swapTracks(self, trackIndex1: int, trackIndex2: int):

        ti1 = int(trackIndex1)
        ti2 = int(trackIndex2)

        siblingDescriptors: List[TrackDescriptor] = self.__tc.findSiblingDescriptors(self.__pattern.getId())

        numSiblings = len(siblingDescriptors)

        if ti1 < 0 or ti1 >= numSiblings:
            raise ValueError(f"PatternDetailsScreen.swapTracks(): trackIndex1 ({ti1}) is out of range ({numSiblings})")

        if ti2 < 0 or ti2 >= numSiblings:
            raise ValueError(f"PatternDetailsScreen.swapTracks(): trackIndex2 ({ti2}) is out of range ({numSiblings})")

        sibling1 = siblingDescriptors[ti1]
        sibling2 = siblingDescriptors[ti2]

        # Swap index and subIndex between the two descriptors.
        subIndex2 = sibling2.getSubIndex()

        sibling2.setIndex(sibling1.getIndex())
        sibling2.setSubIndex(sibling1.getSubIndex())

        sibling1.setIndex(ti2)
        sibling1.setSubIndex(subIndex2)

        if not self.__tc.updateTrack(sibling1.getId(), sibling1):
            raise click.ClickException('Update sibling1 failed')
        if not self.__tc.updateTrack(sibling2.getId(), sibling2):
            raise click.ClickException('Update sibling2 failed')

        self.updateTracks()

    def updateTags(self):

        self.tagsTable.clear()

        if self.__pattern is not None:

            tags = self.__tac.findAllMediaTags(self.__pattern.getId())

            for tagKey, tagValue in tags.items():

                textColor = None
                if tagKey in self.__ignoreGlobalKeys:
                    textColor = 'blue'
                if tagKey in self.__removeGlobalKeys:
                    textColor = 'red'

                # if tagKey not in self.__ignoreTrackKeys:
                row = (formatRichColor(tagKey, textColor), formatRichColor(tagValue, textColor))
                self.tagsTable.add_row(*map(str, row))

    def on_mount(self):

        if self.__showDescriptor is not None:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")

        if self.__pattern is not None:

            self.query_one("#pattern_input", Input).value = str(self.__pattern.getPattern())

        self.updateTags()
        self.updateTracks()

    def compose(self):

        self.tagsTable = DataTable(classes="seven")

        # Define the columns with headers
        self.column_key_tag_key = self.tagsTable.add_column("Key", width=50)
        self.column_key_tag_value = self.tagsTable.add_column("Value", width=100)

        self.tagsTable.cursor_type = 'row'

        self.tracksTable = DataTable(id="tracks_table", classes="seven")

        self.column_key_track_index = self.tracksTable.add_column("Index", width=5)
        self.column_key_track_type = self.tracksTable.add_column("Type", width=10)
        self.column_key_track_sub_index = self.tracksTable.add_column("SubIndex", width=8)
        self.column_key_track_codec = self.tracksTable.add_column("Codec", width=10)
        self.column_key_track_audio_layout = self.tracksTable.add_column("Layout", width=10)
        self.column_key_track_language = self.tracksTable.add_column("Language", width=15)
        self.column_key_track_title = self.tracksTable.add_column("Title", width=48)
        self.column_key_track_default = self.tracksTable.add_column("Default", width=8)
        self.column_key_track_forced = self.tracksTable.add_column("Forced", width=8)
        self.column_key_track_source_index = self.tracksTable.add_column("SrcIndex", width=8)

        self.tracksTable.cursor_type = 'row'

        yield Header()

        with Grid():

            # 1
            yield Static("Edit filename pattern" if self.__pattern is not None else "New filename pattern", id="toplabel")
            yield Input(type="text", id="pattern_input", classes="six")

            # 2
            yield Static("from show")
            yield Static("", id="showlabel", classes="five")
            yield Button("Substitute pattern", id="pattern_button")

            # 3
            yield Static(" ", classes="seven")
            # 4
            yield Static(" ", classes="seven")

            # 5
            yield Static("Media Tags")

            if self.__pattern is not None:
                yield Button("Add", id="button_add_tag")
                yield Button("Edit", id="button_edit_tag")
                yield Button("Delete", id="button_delete_tag")
            else:
                yield Static(" ")
                yield Static(" ")
                yield Static(" ")

            yield Static(" ")
            yield Static(" ")
            yield Static(" ")

            # 6
            yield self.tagsTable

            # 7
            yield Static(" ", classes="seven")

            # 8
            yield Static("Streams")

            if self.__pattern is not None:
                yield Button("Add", id="button_add_track")
                yield Button("Edit", id="button_edit_track")
                yield Button("Delete", id="button_delete_track")
            else:
                yield Static(" ")
                yield Static(" ")
                yield Static(" ")

            yield Static(" ")
            yield Button("Up", id="button_track_up")
            yield Button("Down", id="button_track_down")

            # 9
            yield self.tracksTable

            # 10
            yield Static(" ", classes="seven")

            # 11
            yield Static(" ", classes="seven")

            # 12
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")
            yield Static(" ", classes="five")

            # 13
            yield Static(" ", classes="seven")

        yield Footer()

    def getPatternFromInput(self):
        return str(self.query_one("#pattern_input", Input).value)

    def getSelectedTrackDescriptor(self):

        if not self.__pattern:
            return None

        try:
            # Fetch the currently selected row
            row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)

            if row_key is not None:
                selected_track_data = self.tracksTable.get_row(row_key)

                trackIndex = int(selected_track_data[0])
                trackSubIndex = int(selected_track_data[2])

                return self.__tc.getTrack(self.__pattern.getId(), trackIndex).getDescriptor(self.context, subIndex=trackSubIndex)

            else:
                return None

        except CellDoesNotExist:
            return None

    def getSelectedTag(self):

        try:
            # Fetch the currently selected row
            row_key, col_key = self.tagsTable.coordinate_to_cell_key(self.tagsTable.cursor_coordinate)

            if row_key is not None:
                selected_tag_data = self.tagsTable.get_row(row_key)

                tagKey = removeRichColor(selected_tag_data[0])
                tagValue = removeRichColor(selected_tag_data[1])

                return tagKey, tagValue

            else:
                return None

        except CellDoesNotExist:
            return None

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "save_button":

            patternDescriptor = {}
            patternDescriptor['show_id'] = self.__showDescriptor.getId()
            patternDescriptor['pattern'] = self.getPatternFromInput()

            if self.__pattern is not None:

                if self.__pc.updatePattern(self.__pattern.getId(), patternDescriptor):
                    self.dismiss(patternDescriptor)
                else:
                    #TODO: show an error message
                    self.app.pop_screen()

            else:
                patternId = self.__pc.addPattern(patternDescriptor)
                if patternId:
                    self.dismiss(patternDescriptor)
                else:
                    #TODO: show an error message
                    self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()

        # The stream buttons require a saved pattern: a pattern that was
        # just typed in but not saved yet has no id to attach tracks to.
        if self.__pattern is not None:

            numTracks = len(self.tracksTable.rows)

            if event.button.id == "button_add_track":
                self.app.push_screen(TrackDetailsScreen(patternId = self.__pattern.getId(), index = numTracks), self.handle_add_track)

            selectedTrack = self.getSelectedTrackDescriptor()
            if selectedTrack is not None:
                if event.button.id == "button_edit_track":
                    self.app.push_screen(TrackDetailsScreen(trackDescriptor = selectedTrack), self.handle_edit_track)
                if event.button.id == "button_delete_track":
                    self.app.push_screen(TrackDeleteScreen(trackDescriptor = selectedTrack), self.handle_delete_track)

        if event.button.id == "button_add_tag":
            if self.__pattern is not None:
                self.app.push_screen(TagDetailsScreen(), self.handle_update_tag)

        if event.button.id == "button_edit_tag":
            selectedTag = self.getSelectedTag()
            if selectedTag is not None:
                tagKey, tagValue = selectedTag
                self.app.push_screen(TagDetailsScreen(key=tagKey, value=tagValue), self.handle_update_tag)

        if event.button.id == "button_delete_tag":
            selectedTag = self.getSelectedTag()
            if selectedTag is not None:
                tagKey, tagValue = selectedTag
                self.app.push_screen(TagDeleteScreen(key=tagKey, value=tagValue), self.handle_delete_tag)

        if event.button.id == "pattern_button":

            pattern = self.query_one("#pattern_input", Input).value

            patternMatch = re.search(FileProperties.SE_INDICATOR_PATTERN, pattern)

            if patternMatch:
                self.query_one("#pattern_input", Input).value = pattern.replace(patternMatch.group(1),
                                                                                FileProperties.SE_INDICATOR_PATTERN)

        if event.button.id == "button_track_up":

            selectedTrackDescriptor = self.getSelectedTrackDescriptor()
            if selectedTrackDescriptor is not None:
                selectedTrackIndex = selectedTrackDescriptor.getIndex()

                if selectedTrackIndex > 0 and selectedTrackIndex < self.tracksTable.row_count:
                    correspondingTrackIndex = selectedTrackIndex - 1
                    self.swapTracks(selectedTrackIndex, correspondingTrackIndex)

        if event.button.id == "button_track_down":

            selectedTrackDescriptor = self.getSelectedTrackDescriptor()
            if selectedTrackDescriptor is not None:
                selectedTrackIndex = selectedTrackDescriptor.getIndex()

                if selectedTrackIndex >= 0 and selectedTrackIndex < (self.tracksTable.row_count - 1):
                    correspondingTrackIndex = selectedTrackIndex + 1
                    self.swapTracks(selectedTrackIndex, correspondingTrackIndex)

    def handle_add_track(self, trackDescriptor : TrackDescriptor):

        dispoSet = trackDescriptor.getDispositionSet()
        trackType = trackDescriptor.getType()
        audioLayout = trackDescriptor.getAudioLayout()

        # The row must match the ten columns defined in compose():
        # Index, Type, SubIndex, Codec, Layout, Language, Title, Default, Forced, SrcIndex
        row = (trackDescriptor.getIndex(),
               trackType.label(),
               trackDescriptor.getSubIndex(),
               trackDescriptor.getCodec().label(),
               audioLayout.label() if trackType == TrackType.AUDIO
               and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
               trackDescriptor.getLanguage().label(),
               trackDescriptor.getTitle(),
               'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
               'Yes' if TrackDisposition.FORCED in dispoSet else 'No',
               trackDescriptor.getSourceIndex())

        self.tracksTable.add_row(*map(str, row))

    def handle_edit_track(self, trackDescriptor : TrackDescriptor):

        try:
            row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)

            self.tracksTable.update_cell(row_key, self.column_key_track_audio_layout,
                                         trackDescriptor.getAudioLayout().label()
                                         if trackDescriptor.getType() == TrackType.AUDIO else ' ')

            self.tracksTable.update_cell(row_key, self.column_key_track_language, trackDescriptor.getLanguage().label())
            self.tracksTable.update_cell(row_key, self.column_key_track_title, trackDescriptor.getTitle())
            self.tracksTable.update_cell(row_key, self.column_key_track_default,
                                         'Yes' if TrackDisposition.DEFAULT in trackDescriptor.getDispositionSet() else 'No')
            self.tracksTable.update_cell(row_key, self.column_key_track_forced,
                                         'Yes' if TrackDisposition.FORCED in trackDescriptor.getDispositionSet() else 'No')

        except CellDoesNotExist:
            pass

    def handle_delete_track(self, trackDescriptor : TrackDescriptor):
        self.updateTracks()

    def handle_update_tag(self, tag):

        if self.__pattern is None:
            raise click.ClickException("PatternDetailsScreen.handle_update_tag(): pattern not set")

        if self.__tac.updateMediaTag(self.__pattern.getId(), tag[0], tag[1]) is not None:
            self.updateTags()

    def handle_delete_tag(self, tag):

        if self.__pattern is None:
            raise click.ClickException("PatternDetailsScreen.handle_delete_tag(): pattern not set")

        if self.__tac.deleteMediaTagByKey(self.__pattern.getId(), tag[0]):
            self.updateTags()
        else:
            raise click.ClickException('tag delete failed')
@@ -1,39 +0,0 @@
import subprocess, logging
from typing import List


def executeProcess(commandSequence: List[str], directory: str = None, context: dict = None):
    """Run a command, optionally wrapped in nice/cpulimit.

    niceness: -20 to +19 (values outside this range disable nice)
    cpu_percent: 1 to 99 (values below 1 disable cpulimit)
    """

    if context is None:
        logger = logging.getLogger('FFX')
        logger.addHandler(logging.NullHandler())
    else:
        logger = context['logger']

    niceSequence = []

    niceness = (int(context['resource_limits']['niceness'])
                if context is not None
                and 'resource_limits' in context.keys()
                and 'niceness' in context['resource_limits'].keys() else 99)
    cpu_percent = (int(context['resource_limits']['cpu_percent'])
                   if context is not None
                   and 'resource_limits' in context.keys()
                   and 'cpu_percent' in context['resource_limits'].keys() else 0)

    if niceness >= -20 and niceness <= 19:
        niceSequence += ['nice', '-n', str(niceness)]
    if cpu_percent >= 1:
        niceSequence += ['cpulimit', '-l', str(cpu_percent), '--']

    niceCommand = niceSequence + commandSequence

    logger.debug(f"executeProcess() command sequence: {' '.join(niceCommand)}")

    process = subprocess.Popen(niceCommand, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8', cwd = directory)
    output, error = process.communicate()

    return output, error, process.returncode
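The nice/cpulimit prefix logic in executeProcess() can be isolated into a standalone sketch for testing (the function name `build_limit_prefix` is hypothetical, not part of the codebase):

```python
def build_limit_prefix(niceness=None, cpu_percent=None):
    """Build the command prefix executeProcess() prepends.

    niceness -20..19 is passed to nice(1); cpu_percent >= 1 is passed
    to cpulimit(1). Missing or out-of-range values add nothing.
    """
    prefix = []
    if niceness is not None and -20 <= niceness <= 19:
        prefix += ['nice', '-n', str(niceness)]
    if cpu_percent is not None and cpu_percent >= 1:
        prefix += ['cpulimit', '-l', str(cpu_percent), '--']
    return prefix
```

Note that niceness 99 and cpu_percent 0, the defaults executeProcess() falls back to when the context carries no resource limits, both fail the range checks, so the command runs unwrapped.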
@@ -1,12 +0,0 @@
from textual.app import ComposeResult
from textual.screen import Screen
from textual.widgets import Footer, Placeholder


class SettingsScreen(Screen):

    def __init__(self):
        super().__init__()
        self.context = self.app.getContext()

    def compose(self) -> ComposeResult:
        yield Placeholder("Settings Screen")
        yield Footer()
@@ -1,233 +0,0 @@
import click

from ffx.model.shifted_season import ShiftedSeason


class EpisodeOrderException(Exception):
    pass

class RangeOverlapException(Exception):
    pass


class ShiftedSeasonController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience

    def checkShiftedSeason(self, showId: int, shiftedSeasonObj: dict, shiftedSeasonId: int = 0):
        """
        Check whether the episode range in shiftedSeasonObj overlaps an
        existing shifted season of the same original season for this show.
        Returns True if the range is free, False on overlap.

        shiftedSeasonId, when non-zero, excludes that entry from the check
        (useful when updating an existing entry).
        """

        try:
            s = self.Session()

            originalSeason = shiftedSeasonObj['original_season']
            firstEpisode = int(shiftedSeasonObj['first_episode'])
            lastEpisode = int(shiftedSeasonObj['last_episode'])

            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))
            if shiftedSeasonId:
                q = q.filter(ShiftedSeason.id != int(shiftedSeasonId))

            siblingShiftedSeason: ShiftedSeason
            for siblingShiftedSeason in q.all():

                siblingOriginalSeason = siblingShiftedSeason.getOriginalSeason()
                siblingFirstEpisode = siblingShiftedSeason.getFirstEpisode()
                siblingLastEpisode = siblingShiftedSeason.getLastEpisode()

                if (originalSeason == siblingOriginalSeason
                        and lastEpisode >= siblingFirstEpisode
                        and siblingLastEpisode >= firstEpisode):

                    return False
            return True

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.checkShiftedSeason(): {repr(ex)}")
        finally:
            s.close()
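The overlap condition used by checkShiftedSeason() is the classic inclusive-interval test, which can be isolated as a small sketch (the function name `ranges_overlap` is hypothetical, not part of the codebase):

```python
def ranges_overlap(first_a, last_a, first_b, last_b):
    # Two inclusive episode ranges overlap exactly when each range starts
    # no later than the other one ends; checkShiftedSeason() applies this
    # per sibling entry that shares the same original season.
    return last_a >= first_b and last_b >= first_a
```

Touching ranges such as 1-10 and 10-20 count as overlapping here, since both contain episode 10.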
    def addShiftedSeason(self, showId: int, shiftedSeasonObj: dict):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.addShiftedSeason(): Argument showId is required to be of type int")

        if type(shiftedSeasonObj) is not dict:
            raise ValueError("ShiftedSeasonController.addShiftedSeason(): Argument shiftedSeasonObj is required to be of type dict")

        try:
            s = self.Session()

            firstEpisode = int(shiftedSeasonObj['first_episode'])
            lastEpisode = int(shiftedSeasonObj['last_episode'])

            if lastEpisode < firstEpisode:
                raise EpisodeOrderException()

            shiftedSeason = ShiftedSeason(show_id = int(showId),
                                          original_season = int(shiftedSeasonObj['original_season']),
                                          first_episode = firstEpisode,
                                          last_episode = lastEpisode,
                                          season_offset = int(shiftedSeasonObj['season_offset']),
                                          episode_offset = int(shiftedSeasonObj['episode_offset']))
            s.add(shiftedSeason)
            s.commit()
            return shiftedSeason.getId()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.addShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def updateShiftedSeason(self, shiftedSeasonId: int, shiftedSeasonObj: dict):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.updateShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        if type(shiftedSeasonObj) is not dict:
            raise ValueError("ShiftedSeasonController.updateShiftedSeason(): Argument shiftedSeasonObj is required to be of type dict")

        try:
            s = self.Session()

            q = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId))

            if q.count():

                shiftedSeason = q.first()

                shiftedSeason.original_season = int(shiftedSeasonObj['original_season'])
                shiftedSeason.first_episode = int(shiftedSeasonObj['first_episode'])
                shiftedSeason.last_episode = int(shiftedSeasonObj['last_episode'])
                shiftedSeason.season_offset = int(shiftedSeasonObj['season_offset'])
                shiftedSeason.episode_offset = int(shiftedSeasonObj['episode_offset'])

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.updateShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def findShiftedSeason(self, showId: int, originalSeason: int, firstEpisode: int, lastEpisode: int):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument showId is required to be of type int")

        if type(originalSeason) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument originalSeason is required to be of type int")

        if type(firstEpisode) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument firstEpisode is required to be of type int")

        if type(lastEpisode) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument lastEpisode is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId),
                                              ShiftedSeason.original_season == int(originalSeason),
                                              ShiftedSeason.first_episode == int(firstEpisode),
                                              ShiftedSeason.last_episode == int(lastEpisode))

            return q.first().getId() if q.count() else None

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.findShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def getShiftedSeasonSiblings(self, showId: int):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.getShiftedSeasonSiblings(): Argument showId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))

            return q.all()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.getShiftedSeasonSiblings(): {repr(ex)}")
        finally:
            s.close()

    def getShiftedSeason(self, shiftedSeasonId: int):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.getShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId))

            return q.first() if q.count() else None

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.getShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def deleteShiftedSeason(self, shiftedSeasonId):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.deleteShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId))

            if q.count():

                # Delete via the instance rather than Query.delete(),
                # see https://stackoverflow.com/a/19245058
                shiftedSeason = q.first()
                s.delete(shiftedSeason)

                s.commit()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.deleteShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def shiftSeason(self, showId, season, episode):

        shiftedSeasonEntry: ShiftedSeason
        for shiftedSeasonEntry in self.getShiftedSeasonSiblings(showId):

            if (season == shiftedSeasonEntry.getOriginalSeason()
                    and (shiftedSeasonEntry.getFirstEpisode() == -1 or episode >= shiftedSeasonEntry.getFirstEpisode())
                    and (shiftedSeasonEntry.getLastEpisode() == -1 or episode <= shiftedSeasonEntry.getLastEpisode())):

                shiftedSeason = season + shiftedSeasonEntry.getSeasonOffset()
                shiftedEpisode = episode + shiftedSeasonEntry.getEpisodeOffset()

                self.context['logger'].info(f"Shifting season: {season} episode: {episode} "
                                            + f"-> season: {shiftedSeason} episode: {shiftedEpisode}")

                return shiftedSeason, shiftedEpisode

        return season, episode
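The core of shiftSeason() — match an entry by original season and episode range, then apply the two offsets — can be sketched without the database layer (the function name `shift` and the tuple layout are hypothetical, not part of the codebase; `-1` marks an open bound, as in shiftSeason()):

```python
def shift(season, episode, entries):
    """Apply the first matching shift entry.

    entries: iterable of (original_season, first_episode, last_episode,
    season_offset, episode_offset) tuples; -1 means an open bound.
    """
    for orig, first, last, s_off, e_off in entries:
        if (season == orig
                and (first == -1 or episode >= first)
                and (last == -1 or episode <= last)):
            # Entry matches: shift both coordinates by the stored offsets.
            return season + s_off, episode + e_off
    # No entry matches: leave the numbering untouched.
    return season, episode
```

For example, an entry that turns season 1 episodes 27 and up into season 2 starting at episode 1 would carry season_offset 1 and episode_offset -26.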
@@ -1,125 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .shifted_season_controller import ShiftedSeasonController

from ffx.model.shifted_season import ShiftedSeason


# Screen[dict[int, str, int]]
class ShiftedSeasonDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 330;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, showId = None, shiftedSeasonId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__ssc = ShiftedSeasonController(context = self.context)

        self.__showId = showId
        self.__shiftedSeasonId = shiftedSeasonId

    def on_mount(self):

        shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

        self.query_one("#static_show_id", Static).update(str(self.__showId))
        self.query_one("#static_original_season", Static).update(str(shiftedSeason.getOriginalSeason()))
        self.query_one("#static_first_episode", Static).update(str(shiftedSeason.getFirstEpisode()))
        self.query_one("#static_last_episode", Static).update(str(shiftedSeason.getLastEpisode()))
        self.query_one("#static_season_offset", Static).update(str(shiftedSeason.getSeasonOffset()))
        self.query_one("#static_episode_offset", Static).update(str(shiftedSeason.getEpisodeOffset()))

    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure you want to delete the following shifted season?", id="toplabel", classes="two")

            yield Static(" ", classes="two")

            yield Static("from show")
            yield Static(" ", id="static_show_id")

            yield Static(" ", classes="two")

            yield Static("Original season")
            yield Static(" ", id="static_original_season")

            yield Static("First episode")
            yield Static(" ", id="static_first_episode")

            yield Static("Last episode")
            yield Static(" ", id="static_last_episode")

            yield Static("Season offset")
            yield Static(" ", id="static_season_offset")

            yield Static("Episode offset")
            yield Static(" ", id="static_episode_offset")

            yield Static(" ", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__shiftedSeasonId is None:
                raise click.ClickException('ShiftedSeasonDeleteScreen.on_button_pressed(): shifted season id is undefined')

            if self.__ssc.deleteShiftedSeason(self.__shiftedSeasonId):
                self.dismiss(self.__shiftedSeasonId)

            else:
                #TODO: show an error message
                self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()

@@ -1,221 +0,0 @@
from typing import List

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid

from .shifted_season_controller import ShiftedSeasonController

from ffx.model.shifted_season import ShiftedSeason


# Screen[dict[int, str, int]]
class ShiftedSeasonDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 3 10;
        grid-rows: 2 2 2 2 2 2 2 2 2 2;
        grid-columns: 40 40 40;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }

    .two {
        column-span: 3;
    }

    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }
    .six {
        column-span: 6;
    }
    .seven {
        column-span: 7;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .yellow {
        tint: yellow 40%;
    }
    """

    def __init__(self, showId = None, shiftedSeasonId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__ssc = ShiftedSeasonController(context = self.context)

        self.__showId = showId
        self.__shiftedSeasonId = shiftedSeasonId

    def on_mount(self):

        if self.__shiftedSeasonId is not None:
            shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

            originalSeason = shiftedSeason.getOriginalSeason()
            self.query_one("#input_original_season", Input).value = str(originalSeason)
|
|
||||||
|
|
||||||
firstEpisode = shiftedSeason.getFirstEpisode()
|
|
||||||
self.query_one("#input_first_episode", Input).value = str(firstEpisode) if firstEpisode != -1 else ''
|
|
||||||
|
|
||||||
lastEpisode = shiftedSeason.getLastEpisode()
|
|
||||||
self.query_one("#input_last_episode", Input).value = str(lastEpisode) if lastEpisode != -1 else ''
|
|
||||||
|
|
||||||
seasonOffset = shiftedSeason.getSeasonOffset()
|
|
||||||
self.query_one("#input_season_offset", Input).value = str(seasonOffset) if seasonOffset else ''
|
|
||||||
|
|
||||||
episodeOffset = shiftedSeason.getEpisodeOffset()
|
|
||||||
self.query_one("#input_episode_offset", Input).value = str(episodeOffset) if episodeOffset else ''
|
|
||||||
|
|
||||||
|
|
||||||
def compose(self):
|
|
||||||
|
|
||||||
yield Header()
|
|
||||||
|
|
||||||
with Grid():
|
|
||||||
|
|
||||||
# 1
|
|
||||||
yield Static("Edit shifted season" if self.__shiftedSeasonId is not None else "New shifted season", id="toplabel", classes="three")
|
|
||||||
|
|
||||||
# 2
|
|
||||||
yield Static(" ", classes="three")
|
|
||||||
|
|
||||||
# 3
|
|
||||||
yield Static("Original season")
|
|
||||||
yield Input(id="input_original_season", classes="two")
|
|
||||||
|
|
||||||
# 4
|
|
||||||
yield Static("First Episode")
|
|
||||||
yield Input(id="input_first_episode", classes="two")
|
|
||||||
|
|
||||||
# 5
|
|
||||||
yield Static("Last Episode")
|
|
||||||
yield Input(id="input_last_episode", classes="two")
|
|
||||||
|
|
||||||
# 6
|
|
||||||
yield Static("Season offset")
|
|
||||||
yield Input(id="input_season_offset", classes="two")
|
|
||||||
|
|
||||||
# 7
|
|
||||||
yield Static("Episode offset")
|
|
||||||
yield Input(id="input_episode_offset", classes="two")
|
|
||||||
|
|
||||||
# 8
|
|
||||||
yield Static(" ", classes="three")
|
|
||||||
|
|
||||||
# 9
|
|
||||||
yield Button("Save", id="save_button")
|
|
||||||
yield Button("Cancel", id="cancel_button")
|
|
||||||
yield Static(" ")
|
|
||||||
|
|
||||||
# 10
|
|
||||||
yield Static(" ", classes="three")
|
|
||||||
|
|
||||||
yield Footer()
|
|
||||||
|
|
||||||
|
|
||||||
def getShiftedSeasonObjFromInput(self):
|
|
||||||
|
|
||||||
shiftedSeasonObj = {}
|
|
||||||
|
|
||||||
originalSeason = self.query_one("#input_original_season", Input).value
|
|
||||||
if not originalSeason:
|
|
||||||
return None
|
|
||||||
shiftedSeasonObj['original_season'] = int(originalSeason)
|
|
||||||
|
|
||||||
try:
|
|
||||||
shiftedSeasonObj['first_episode'] = int(self.query_one("#input_first_episode", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
shiftedSeasonObj['first_episode'] = -1
|
|
||||||
|
|
||||||
try:
|
|
||||||
shiftedSeasonObj['last_episode'] = int(self.query_one("#input_last_episode", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
shiftedSeasonObj['last_episode'] = -1
|
|
||||||
|
|
||||||
try:
|
|
||||||
shiftedSeasonObj['season_offset'] = int(self.query_one("#input_season_offset", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
shiftedSeasonObj['season_offset'] = 0
|
|
||||||
|
|
||||||
try:
|
|
||||||
shiftedSeasonObj['episode_offset'] = int(self.query_one("#input_episode_offset", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
shiftedSeasonObj['episode_offset'] = 0
|
|
||||||
|
|
||||||
return shiftedSeasonObj
|
|
||||||
|
|
||||||
|
|
||||||
# Event handler for button press
|
|
||||||
def on_button_pressed(self, event: Button.Pressed) -> None:
|
|
||||||
|
|
||||||
# Check if the button pressed is the one we are interested in
|
|
||||||
if event.button.id == "save_button":
|
|
||||||
|
|
||||||
shiftedSeasonObj = self.getShiftedSeasonObjFromInput()
|
|
||||||
|
|
||||||
if shiftedSeasonObj is not None:
|
|
||||||
|
|
||||||
if self.__shiftedSeasonId is not None:
|
|
||||||
|
|
||||||
if self.__ssc.checkShiftedSeason(self.__showId, shiftedSeasonObj,
|
|
||||||
shiftedSeasonId = self.__shiftedSeasonId):
|
|
||||||
if self.__ssc.updateShiftedSeason(self.__shiftedSeasonId, shiftedSeasonObj):
|
|
||||||
self.dismiss((self.__shiftedSeasonId, shiftedSeasonObj))
|
|
||||||
else:
|
|
||||||
#TODO: Meldung
|
|
||||||
self.app.pop_screen()
|
|
||||||
|
|
||||||
else:
|
|
||||||
if self.__ssc.checkShiftedSeason(self.__showId, shiftedSeasonObj):
|
|
||||||
self.__shiftedSeasonId = self.__ssc.addShiftedSeason(self.__showId, shiftedSeasonObj)
|
|
||||||
self.dismiss((self.__shiftedSeasonId, shiftedSeasonObj))
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "cancel_button":
|
|
||||||
self.app.pop_screen()
|
|
||||||
@@ -1,133 +0,0 @@
import click

from ffx.model.show import Show
from ffx.show_descriptor import ShowDescriptor


class ShowController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session'] # convenience


    def getShowDescriptor(self, showId):

        try:
            s = self.Session()
            q = s.query(Show).filter(Show.id == showId)

            if q.count():
                show: Show = q.first()
                return show.getDescriptor(self.context)

        except Exception as ex:
            raise click.ClickException(f"ShowController.getShowDescriptor(): {repr(ex)}")
        finally:
            s.close()

    def getShow(self, showId):

        try:
            s = self.Session()
            q = s.query(Show).filter(Show.id == showId)

            return q.first() if q.count() else None

        except Exception as ex:
            raise click.ClickException(f"ShowController.getShow(): {repr(ex)}")
        finally:
            s.close()

    def getAllShows(self):

        try:
            s = self.Session()
            q = s.query(Show)

            if q.count():
                return q.all()
            else:
                return []

        except Exception as ex:
            raise click.ClickException(f"ShowController.getAllShows(): {repr(ex)}")
        finally:
            s.close()


    def updateShow(self, showDescriptor: ShowDescriptor):

        try:
            s = self.Session()
            q = s.query(Show).filter(Show.id == showDescriptor.getId())

            if not q.count():
                show = Show(id = int(showDescriptor.getId()),
                            name = str(showDescriptor.getName()),
                            year = int(showDescriptor.getYear()),
                            index_season_digits = showDescriptor.getIndexSeasonDigits(),
                            index_episode_digits = showDescriptor.getIndexEpisodeDigits(),
                            indicator_season_digits = showDescriptor.getIndicatorSeasonDigits(),
                            indicator_episode_digits = showDescriptor.getIndicatorEpisodeDigits())

                s.add(show)
                s.commit()
                return True
            else:

                currentShow = q.first()

                changed = False
                if currentShow.name != str(showDescriptor.getName()):
                    currentShow.name = str(showDescriptor.getName())
                    changed = True
                if currentShow.year != int(showDescriptor.getYear()):
                    currentShow.year = int(showDescriptor.getYear())
                    changed = True

                if currentShow.index_season_digits != int(showDescriptor.getIndexSeasonDigits()):
                    currentShow.index_season_digits = int(showDescriptor.getIndexSeasonDigits())
                    changed = True
                if currentShow.index_episode_digits != int(showDescriptor.getIndexEpisodeDigits()):
                    currentShow.index_episode_digits = int(showDescriptor.getIndexEpisodeDigits())
                    changed = True
                if currentShow.indicator_season_digits != int(showDescriptor.getIndicatorSeasonDigits()):
                    currentShow.indicator_season_digits = int(showDescriptor.getIndicatorSeasonDigits())
                    changed = True
                if currentShow.indicator_episode_digits != int(showDescriptor.getIndicatorEpisodeDigits()):
                    currentShow.indicator_episode_digits = int(showDescriptor.getIndicatorEpisodeDigits())
                    changed = True

                if changed:
                    s.commit()
                return changed

        except Exception as ex:
            raise click.ClickException(f"ShowController.updateShow(): {repr(ex)}")
        finally:
            s.close()


    def deleteShow(self, show_id):
        try:
            s = self.Session()
            q = s.query(Show).filter(Show.id == int(show_id))

            if q.count():

                # DAFUQ: https://stackoverflow.com/a/19245058
                # q.delete()
                show = q.first()
                s.delete(show)

                s.commit()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"ShowController.deleteShow(): {repr(ex)}")
        finally:
            s.close()
@@ -1,95 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .show_controller import ShowController

# Screen[dict[int, str, int]]
class ShowDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 auto;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, showId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session'] # convenience

        self.__sc = ShowController(context = self.context)

        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else {}


    def on_mount(self):
        if self.__showDescriptor is not None:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")


    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure you want to delete the following show?", id="toplabel", classes="two")

            yield Static("", classes="two")

            yield Static("", id="showlabel")
            yield Static("")

            yield Static("", classes="two")

            yield Static("", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")


        yield Footer()


    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__showDescriptor is not None:
                if self.__sc.deleteShow(self.__showDescriptor.getId()):
                    self.dismiss(self.__showDescriptor)

                else:
                    # TODO: show an error message
                    self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,106 +0,0 @@
import logging


class ShowDescriptor():
    """This class represents a show's identifying data (id, name, year) and its season/episode digit formatting settings"""

    CONTEXT_KEY = 'context'

    ID_KEY = 'id'
    NAME_KEY = 'name'
    YEAR_KEY = 'year'

    INDEX_SEASON_DIGITS_KEY = 'index_season_digits'
    INDEX_EPISODE_DIGITS_KEY = 'index_episode_digits'
    INDICATOR_SEASON_DIGITS_KEY = 'indicator_season_digits'
    INDICATOR_EPISODE_DIGITS_KEY = 'indicator_episode_digits'

    DEFAULT_INDEX_SEASON_DIGITS = 2
    DEFAULT_INDEX_EPISODE_DIGITS = 2
    DEFAULT_INDICATOR_SEASON_DIGITS = 2
    DEFAULT_INDICATOR_EPISODE_DIGITS = 2


    def __init__(self, **kwargs):

        if ShowDescriptor.CONTEXT_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.CONTEXT_KEY]) is not dict:
                raise TypeError(
                    f"ShowDescriptor.__init__(): Argument {ShowDescriptor.CONTEXT_KEY} is required to be of type dict"
                )
            self.__context = kwargs[ShowDescriptor.CONTEXT_KEY]
            self.__logger = self.__context['logger']
        else:
            self.__context = {}
            self.__logger = logging.getLogger('FFX')
            self.__logger.addHandler(logging.NullHandler())

        if ShowDescriptor.ID_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.ID_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.ID_KEY} is required to be of type int")
            self.__showId = kwargs[ShowDescriptor.ID_KEY]
        else:
            self.__showId = -1

        if ShowDescriptor.NAME_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.NAME_KEY]) is not str:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.NAME_KEY} is required to be of type str")
            self.__showName = kwargs[ShowDescriptor.NAME_KEY]
        else:
            self.__showName = ''

        if ShowDescriptor.YEAR_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.YEAR_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.YEAR_KEY} is required to be of type int")
            self.__showYear = kwargs[ShowDescriptor.YEAR_KEY]
        else:
            self.__showYear = -1


        if ShowDescriptor.INDEX_SEASON_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDEX_SEASON_DIGITS_KEY} is required to be of type int")
            self.__indexSeasonDigits = kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]
        else:
            self.__indexSeasonDigits = ShowDescriptor.DEFAULT_INDEX_SEASON_DIGITS

        if ShowDescriptor.INDEX_EPISODE_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDEX_EPISODE_DIGITS_KEY} is required to be of type int")
            self.__indexEpisodeDigits = kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]
        else:
            self.__indexEpisodeDigits = ShowDescriptor.DEFAULT_INDEX_EPISODE_DIGITS

        if ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY} is required to be of type int")
            self.__indicatorSeasonDigits = kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]
        else:
            self.__indicatorSeasonDigits = ShowDescriptor.DEFAULT_INDICATOR_SEASON_DIGITS

        if ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY} is required to be of type int")
            self.__indicatorEpisodeDigits = kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]
        else:
            self.__indicatorEpisodeDigits = ShowDescriptor.DEFAULT_INDICATOR_EPISODE_DIGITS


    def getId(self):
        return self.__showId
    def getName(self):
        return self.__showName
    def getYear(self):
        return self.__showYear

    def getIndexSeasonDigits(self):
        return self.__indexSeasonDigits
    def getIndexEpisodeDigits(self):
        return self.__indexEpisodeDigits
    def getIndicatorSeasonDigits(self):
        return self.__indicatorSeasonDigits
    def getIndicatorEpisodeDigits(self):
        return self.__indicatorEpisodeDigits

    def getFilenamePrefix(self):
        return f"{self.__showName} ({str(self.__showYear)})"
@@ -1,492 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, DataTable, Input
from textual.containers import Grid
from textual.widgets._data_table import CellDoesNotExist

from ffx.model.pattern import Pattern

from .pattern_details_screen import PatternDetailsScreen
from .pattern_delete_screen import PatternDeleteScreen

from .show_controller import ShowController
from .pattern_controller import PatternController
from .tmdb_controller import TmdbController
from .shifted_season_controller import ShiftedSeasonController

from .show_descriptor import ShowDescriptor

from .shifted_season_details_screen import ShiftedSeasonDetailsScreen
from .shifted_season_delete_screen import ShiftedSeasonDeleteScreen

from ffx.model.shifted_season import ShiftedSeason

from .helper import filterFilename


# Screen[dict[int, str, int]]
class ShowDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 16;
        grid-rows: 2 2 2 2 2 2 2 2 2 2 2 9 2 9 2 2;
        grid-columns: 30 30 30 30 30;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        column-span: 2;
        min-height: 8;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }


    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }
    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    BINDINGS = [
        ("a", "add_pattern", "Add Pattern"),
        ("e", "edit_pattern", "Edit Pattern"),
        ("r", "remove_pattern", "Remove Pattern"),
    ]

    def __init__(self, showId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session'] # convenience

        self.__sc = ShowController(context = self.context)
        self.__pc = PatternController(context = self.context)
        self.__tc = TmdbController()
        self.__ssc = ShiftedSeasonController(context = self.context)

        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None


    def loadPatterns(self, show_id : int):

        try:
            s = self.Session()
            q = s.query(Pattern).filter(Pattern.show_id == int(show_id))

            return [{'id': int(p.id), 'pattern': str(p.pattern)} for p in q.all()]

        except Exception as ex:
            raise click.ClickException(f"ShowDetailsScreen.loadPatterns(): {repr(ex)}")
        finally:
            s.close()


    def updateShiftedSeasons(self):

        self.shiftedSeasonsTable.clear()

        if self.__showDescriptor is not None:

            showId = int(self.__showDescriptor.getId())

            shiftedSeason: ShiftedSeason
            for shiftedSeason in self.__ssc.getShiftedSeasonSiblings(showId=showId):

                shiftedSeasonObj = shiftedSeason.getObj()

                firstEpisode = shiftedSeasonObj['first_episode']
                firstEpisodeStr = str(firstEpisode) if firstEpisode != -1 else ''

                lastEpisode = shiftedSeasonObj['last_episode']
                lastEpisodeStr = str(lastEpisode) if lastEpisode != -1 else ''

                row = (shiftedSeasonObj['original_season'],
                       firstEpisodeStr,
                       lastEpisodeStr,
                       shiftedSeasonObj['season_offset'],
                       shiftedSeasonObj['episode_offset'])

                self.shiftedSeasonsTable.add_row(*map(str, row))


    def on_mount(self):

        if self.__showDescriptor is not None:

            showId = int(self.__showDescriptor.getId())

            self.query_one("#id_static", Static).update(str(showId))
            self.query_one("#name_input", Input).value = str(self.__showDescriptor.getName())
            self.query_one("#year_input", Input).value = str(self.__showDescriptor.getYear())

            self.query_one("#index_season_digits_input", Input).value = str(self.__showDescriptor.getIndexSeasonDigits())
            self.query_one("#index_episode_digits_input", Input).value = str(self.__showDescriptor.getIndexEpisodeDigits())
            self.query_one("#indicator_season_digits_input", Input).value = str(self.__showDescriptor.getIndicatorSeasonDigits())
            self.query_one("#indicator_episode_digits_input", Input).value = str(self.__showDescriptor.getIndicatorEpisodeDigits())

            #raise click.ClickException(f"show_id {showId}")
            patternList = self.loadPatterns(showId)
            # raise click.ClickException(f"patternList {patternList}")
            for pattern in patternList:
                row = (pattern['pattern'],)
                self.patternTable.add_row(*map(str, row))

            self.updateShiftedSeasons()

        else:

            self.query_one("#index_season_digits_input", Input).value = "2"
            self.query_one("#index_episode_digits_input", Input).value = "2"
            self.query_one("#indicator_season_digits_input", Input).value = "2"
            self.query_one("#indicator_episode_digits_input", Input).value = "2"


    def getSelectedPatternDescriptor(self):

        selectedPattern = {}

        try:

            # Fetch the currently selected row when 'Enter' is pressed
            #selected_row_index = self.table.cursor_row
            row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.patternTable.get_row(row_key)

                selectedPattern['show_id'] = self.__showDescriptor.getId()
                selectedPattern['pattern'] = str(selected_row_data[0])

        except CellDoesNotExist:
            pass

        return selectedPattern


    def getSelectedShiftedSeasonObjFromInput(self):

        shiftedSeasonObj = {}

        try:

            # Fetch the currently selected row when 'Enter' is pressed
            #selected_row_index = self.table.cursor_row
            row_key, col_key = self.shiftedSeasonsTable.coordinate_to_cell_key(self.shiftedSeasonsTable.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.shiftedSeasonsTable.get_row(row_key)

                shiftedSeasonObj['original_season'] = int(selected_row_data[0])
                shiftedSeasonObj['first_episode'] = int(selected_row_data[1]) if selected_row_data[1].isnumeric() else -1
                shiftedSeasonObj['last_episode'] = int(selected_row_data[2]) if selected_row_data[2].isnumeric() else -1
                shiftedSeasonObj['season_offset'] = int(selected_row_data[3]) if selected_row_data[3].isnumeric() else 0
                shiftedSeasonObj['episode_offset'] = int(selected_row_data[4]) if selected_row_data[4].isnumeric() else 0


                if self.__showDescriptor is not None:

                    showId = int(self.__showDescriptor.getId())

                    shiftedSeasonId = self.__ssc.findShiftedSeason(showId,
                                                                   originalSeason=shiftedSeasonObj['original_season'],
                                                                   firstEpisode=shiftedSeasonObj['first_episode'],
                                                                   lastEpisode=shiftedSeasonObj['last_episode'])
                    if shiftedSeasonId is not None:
                        shiftedSeasonObj['id'] = shiftedSeasonId

        except CellDoesNotExist:
            pass

        return shiftedSeasonObj


    def action_add_pattern(self):
        if self.__showDescriptor is not None:
            self.app.push_screen(PatternDetailsScreen(showId = self.__showDescriptor.getId()), self.handle_add_pattern)


    def handle_add_pattern(self, screenResult):

        pattern = (screenResult['pattern'],)
        self.patternTable.add_row(*map(str, pattern))


    def action_edit_pattern(self):

        selectedPatternDescriptor = self.getSelectedPatternDescriptor()

        if selectedPatternDescriptor:

            selectedPatternId = self.__pc.findPattern(selectedPatternDescriptor)

            if selectedPatternId is None:
                raise click.ClickException("ShowDetailsScreen.action_edit_pattern(): Pattern to edit has no id")

            self.app.push_screen(PatternDetailsScreen(patternId = selectedPatternId, showId = self.__showDescriptor.getId()), self.handle_edit_pattern)


    def handle_edit_pattern(self, screenResult):

        try:

            row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
            self.patternTable.update_cell(row_key, self.column_key_pattern, screenResult['pattern'])

        except CellDoesNotExist:
            pass


    def action_remove_pattern(self):

        selectedPatternDescriptor = self.getSelectedPatternDescriptor()

        if selectedPatternDescriptor:

            selectedPatternId = self.__pc.findPattern(selectedPatternDescriptor)

            if selectedPatternId is None:
                raise click.ClickException("ShowDetailsScreen.action_remove_pattern(): Pattern to remove has no id")

            self.app.push_screen(PatternDeleteScreen(patternId = selectedPatternId, showId = self.__showDescriptor.getId()), self.handle_remove_pattern)


    def handle_remove_pattern(self, pattern):

        try:
            row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
            self.patternTable.remove_row(row_key)

        except CellDoesNotExist:
            pass


    def compose(self):

        # Create the DataTable widget
        self.patternTable = DataTable(classes="five")

        # Define the columns with headers
        self.column_key_pattern = self.patternTable.add_column("Pattern", width=150)

        self.patternTable.cursor_type = 'row'


        self.shiftedSeasonsTable = DataTable(classes="five")

        self.column_key_original_season = self.shiftedSeasonsTable.add_column("Original Season", width=30)
        self.column_key_first_episode = self.shiftedSeasonsTable.add_column("First Episode", width=30)
        self.column_key_last_episode = self.shiftedSeasonsTable.add_column("Last Episode", width=30)
        self.column_key_season_offset = self.shiftedSeasonsTable.add_column("Season Offset", width=30)
        self.column_key_episode_offset = self.shiftedSeasonsTable.add_column("Episode Offset", width=30)

        self.shiftedSeasonsTable.cursor_type = 'row'


        yield Header()

        with Grid():

            # 1
            yield Static("Show" if self.__showDescriptor is not None else "New Show", id="toplabel")
            yield Button("Identify", id="identify_button")
            yield Static(" ", classes="three")

            # 2
            yield Static("ID")
            if self.__showDescriptor is not None:
                yield Static("", id="id_static", classes="four")
            else:
                yield Input(type="integer", id="id_input", classes="four")

            # 3
            yield Static("Name")
            yield Input(type="text", id="name_input", classes="four")

            # 4
            yield Static("Year")
            yield Input(type="integer", id="year_input", classes="four")

            #5
            yield Static(" ", classes="five")

            #6
            yield Static("Index Season Digits")
            yield Input(type="integer", id="index_season_digits_input", classes="four")

            #7
            yield Static("Index Episode Digits")
            yield Input(type="integer", id="index_episode_digits_input", classes="four")

            #8
            yield Static("Indicator Season Digits")
            yield Input(type="integer", id="indicator_season_digits_input", classes="four")

            #9
            yield Static("Indicator Episode Digits")
            yield Input(type="integer", id="indicator_episode_digits_input", classes="four")

            # 10
            yield Static(" ", classes="five")

            # 11
            yield Static("Shifted seasons", classes="two")

            if self.__showDescriptor is not None:
                yield Button("Add", id="button_add_shifted_season")
                yield Button("Edit", id="button_edit_shifted_season")
                yield Button("Delete", id="button_delete_shifted_season")
            else:
                yield Static(" ")
                yield Static(" ")
                yield Static(" ")

            # 12
            yield self.shiftedSeasonsTable

            # 13
            yield Static("File patterns", classes="five")
            # 14
            yield self.patternTable

            # 15
            yield Static(" ", classes="five")

            # 16
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")


        yield Footer()


    def getShowDescriptorFromInput(self) -> ShowDescriptor:
|
|
||||||
|
|
||||||
kwargs = {}
|
|
||||||
|
|
||||||
try:
|
|
||||||
if self.__showDescriptor:
|
|
||||||
kwargs[ShowDescriptor.ID_KEY] = int(self.__showDescriptor.getId())
|
|
||||||
else:
|
|
||||||
kwargs[ShowDescriptor.ID_KEY] = int(self.query_one("#id_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
return None
|
|
||||||
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.NAME_KEY] = str(self.query_one("#name_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.YEAR_KEY] = int(self.query_one("#year_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY] = int(self.query_one("#index_season_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY] = int(self.query_one("#index_episode_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY] = int(self.query_one("#indicator_season_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY] = int(self.query_one("#indicator_episode_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
return ShowDescriptor(**kwargs)
|
|
||||||
|
|
||||||
|
|
||||||
# Event handler for button press
|
|
||||||
def on_button_pressed(self, event: Button.Pressed) -> None:
|
|
||||||
|
|
||||||
if event.button.id == "save_button":
|
|
||||||
|
|
||||||
showDescriptor = self.getShowDescriptorFromInput()
|
|
||||||
|
|
||||||
if not showDescriptor is None:
|
|
||||||
if self.__sc.updateShow(showDescriptor):
|
|
||||||
self.dismiss(showDescriptor)
|
|
||||||
else:
|
|
||||||
#TODO: Meldung
|
|
||||||
self.app.pop_screen()
|
|
||||||
|
|
||||||
if event.button.id == "cancel_button":
|
|
||||||
self.app.pop_screen()
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "identify_button":
|
|
||||||
|
|
||||||
showDescriptor = self.getShowDescriptorFromInput()
|
|
||||||
if not showDescriptor is None:
|
|
||||||
showName, showYear = self.__tc.getShowNameAndYear(showDescriptor.getId())
|
|
||||||
|
|
||||||
self.query_one("#name_input", Input).value = filterFilename(showName)
|
|
||||||
self.query_one("#year_input", Input).value = str(showYear)
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "button_add_shifted_season":
|
|
||||||
if not self.__showDescriptor is None:
|
|
||||||
self.app.push_screen(ShiftedSeasonDetailsScreen(showId = self.__showDescriptor.getId()), self.handle_update_shifted_season)
|
|
||||||
|
|
||||||
if event.button.id == "button_edit_shifted_season":
|
|
||||||
selectedShiftedSeasonObj = self.getSelectedShiftedSeasonObjFromInput()
|
|
||||||
if 'id' in selectedShiftedSeasonObj.keys():
|
|
||||||
self.app.push_screen(ShiftedSeasonDetailsScreen(showId = self.__showDescriptor.getId(), shiftedSeasonId=selectedShiftedSeasonObj['id']), self.handle_update_shifted_season)
|
|
||||||
|
|
||||||
if event.button.id == "button_delete_shifted_season":
|
|
||||||
selectedShiftedSeasonObj = self.getSelectedShiftedSeasonObjFromInput()
|
|
||||||
if 'id' in selectedShiftedSeasonObj.keys():
|
|
||||||
self.app.push_screen(ShiftedSeasonDeleteScreen(showId = self.__showDescriptor.getId(), shiftedSeasonId=selectedShiftedSeasonObj['id']), self.handle_delete_shifted_season)
|
|
||||||
|
|
||||||
|
|
||||||
def handle_update_shifted_season(self, screenResult):
|
|
||||||
self.updateShiftedSeasons()
|
|
||||||
|
|
||||||
def handle_delete_shifted_season(self, screenResult):
|
|
||||||
self.updateShiftedSeasons()
|
|
||||||
@@ -1,168 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, DataTable
from textual.containers import Grid

from .show_controller import ShowController

from .show_details_screen import ShowDetailsScreen
from .show_delete_screen import ShowDeleteScreen

from ffx.show_descriptor import ShowDescriptor

from textual.widgets._data_table import CellDoesNotExist


class ShowsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 1;
        grid-rows: 2 auto;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #top {
        height: 1;
    }

    #two {
        column-span: 2;
        row-span: 2;
        tint: magenta 40%;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    BINDINGS = [
        ("e", "edit_show", "Edit Show"),
        ("n", "new_show", "New Show"),
        ("d", "delete_show", "Delete Show"),
    ]


    def __init__(self):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__sc = ShowController(context=self.context)


    def getSelectedShowId(self):

        try:
            # Fetch the currently selected row when 'Enter' is pressed
            # selected_row_index = self.table.cursor_row
            row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.table.get_row(row_key)

                return selected_row_data[0]

        except CellDoesNotExist:
            return None


    def action_new_show(self):
        self.app.push_screen(ShowDetailsScreen(), self.handle_new_screen)

    def handle_new_screen(self, screenResult):

        show = (screenResult['id'], screenResult['name'], screenResult['year'])
        self.table.add_row(*map(str, show))


    def action_edit_show(self):

        selectedShowId = self.getSelectedShowId()

        if selectedShowId is not None:
            self.app.push_screen(ShowDetailsScreen(showId=selectedShowId), self.handle_edit_screen)


    def handle_edit_screen(self, showDescriptor: ShowDescriptor):

        try:
            row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)

            self.table.update_cell(row_key, self.column_key_name, showDescriptor.getName())
            self.table.update_cell(row_key, self.column_key_year, showDescriptor.getYear())

        except CellDoesNotExist:
            pass


    def action_delete_show(self):

        selectedShowId = self.getSelectedShowId()

        if selectedShowId is not None:
            self.app.push_screen(ShowDeleteScreen(showId=selectedShowId), self.handle_delete_show)


    def handle_delete_show(self, showDescriptor: ShowDescriptor):

        try:
            row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)
            self.table.remove_row(row_key)

        except CellDoesNotExist:
            pass


    def on_mount(self) -> None:
        for show in self.__sc.getAllShows():
            row = (int(show.id), show.name, show.year)  # convert each element to a string before adding
            self.table.add_row(*map(str, row))


    def compose(self):

        # Create the DataTable widget
        self.table = DataTable()

        # Define the columns with headers
        self.column_key_id = self.table.add_column("ID", width=10)
        self.column_key_name = self.table.add_column("Name", width=50)
        self.column_key_year = self.table.add_column("Year", width=10)

        self.table.cursor_type = 'row'

        yield Header()

        with Grid():

            yield Static("Shows")

            yield self.table

        f = Footer()
        f.description = "yolo"

        yield f
@@ -1,220 +0,0 @@
import click

from ffx.model.track import Track

from ffx.model.media_tag import MediaTag
from ffx.model.track_tag import TrackTag


class TagController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience


    def updateMediaTag(self, patternId, tagKey, tagValue):

        try:
            s = self.Session()

            q = s.query(MediaTag).filter(MediaTag.pattern_id == int(patternId),
                                         MediaTag.key == str(tagKey))
            tag = q.first()
            if tag:
                tag.value = str(tagValue)
            else:
                tag = MediaTag(pattern_id=int(patternId),
                               key=str(tagKey),
                               value=str(tagValue))
                s.add(tag)
            s.commit()

            return int(tag.id)

        except Exception as ex:
            raise click.ClickException(f"TagController.updateMediaTag(): {repr(ex)}")
        finally:
            s.close()

    def updateTrackTag(self, trackId, tagKey, tagValue):

        try:
            s = self.Session()

            q = s.query(TrackTag).filter(TrackTag.track_id == int(trackId),
                                         TrackTag.key == str(tagKey))
            tag = q.first()
            if tag:
                tag.value = str(tagValue)
            else:
                tag = TrackTag(track_id=int(trackId),
                               key=str(tagKey),
                               value=str(tagValue))
                s.add(tag)
            s.commit()

            return int(tag.id)

        except Exception as ex:
            raise click.ClickException(f"TagController.updateTrackTag(): {repr(ex)}")
        finally:
            s.close()

    def deleteMediaTagByKey(self, patternId, tagKey):

        try:
            s = self.Session()

            q = s.query(MediaTag).filter(MediaTag.pattern_id == int(patternId),
                                         MediaTag.key == str(tagKey))
            if q.count():
                tag = q.first()
                s.delete(tag)
                s.commit()
                return True
            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteMediaTagByKey(): {repr(ex)}")
        finally:
            s.close()

    def deleteTrackTagByKey(self, trackId, tagKey):

        try:
            s = self.Session()

            q = s.query(TrackTag).filter(TrackTag.track_id == int(trackId),
                                         TrackTag.key == str(tagKey))
            tag = q.first()
            if tag:
                s.delete(tag)
                s.commit()
                return True
            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteTrackTagByKey(): {repr(ex)}")
        finally:
            s.close()

    def findAllMediaTags(self, patternId) -> dict:

        try:
            s = self.Session()

            q = s.query(MediaTag).filter(MediaTag.pattern_id == int(patternId))

            if q.count():
                return {t.key: t.value for t in q.all()}
            else:
                return {}

        except Exception as ex:
            raise click.ClickException(f"TagController.findAllMediaTags(): {repr(ex)}")
        finally:
            s.close()


    def findAllTrackTags(self, trackId) -> dict:

        try:
            s = self.Session()

            q = s.query(TrackTag).filter(TrackTag.track_id == int(trackId))

            if q.count():
                return {t.key: t.value for t in q.all()}
            else:
                return {}

        except Exception as ex:
            raise click.ClickException(f"TagController.findAllTrackTags(): {repr(ex)}")
        finally:
            s.close()


    def findMediaTag(self, trackId: int, trackKey: str) -> MediaTag:

        try:
            s = self.Session()
            q = s.query(MediaTag).filter(MediaTag.track_id == int(trackId), MediaTag.key == str(trackKey))

            if q.count():
                return q.first()
            else:
                return None

        except Exception as ex:
            raise click.ClickException(f"TagController.findMediaTag(): {repr(ex)}")
        finally:
            s.close()

    def findTrackTag(self, trackId: int, tagKey: str) -> TrackTag:

        try:
            s = self.Session()
            q = s.query(TrackTag).filter(TrackTag.track_id == int(trackId), TrackTag.key == str(tagKey))

            if q.count():
                return q.first()
            else:
                return None

        except Exception as ex:
            raise click.ClickException(f"TagController.findTrackTag(): {repr(ex)}")
        finally:
            s.close()


    def deleteMediaTag(self, tagId) -> bool:
        try:
            s = self.Session()
            q = s.query(MediaTag).filter(MediaTag.id == int(tagId))

            if q.count():

                tag = q.first()
                s.delete(tag)
                s.commit()
                return True

            return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteMediaTag(): {repr(ex)}")
        finally:
            s.close()


    def deleteTrackTag(self, tagId: int) -> bool:

        if type(tagId) is not int:
            raise TypeError('TagController.deleteTrackTag(): Argument tagId is required to be of type int')

        try:
            s = self.Session()
            q = s.query(TrackTag).filter(TrackTag.id == int(tagId))

            if q.count():

                tag = q.first()
                s.delete(tag)
                s.commit()
                return True

            return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteTrackTag(): {repr(ex)}")
        finally:
            s.close()
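The update-or-insert ("upsert") pattern that `updateTrackTag` and `updateMediaTag` implement with SQLAlchemy can be sketched in isolation with the standard-library `sqlite3` module. This is a minimal illustration, not the project's actual schema; the `tags` table and `upsert_tag` helper are hypothetical stand-ins:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE tags (id INTEGER PRIMARY KEY, track_id INTEGER, key TEXT, value TEXT)")

def upsert_tag(track_id: int, key: str, value: str) -> int:
    """Update the tag row if (track_id, key) exists, otherwise insert it; return the row id."""
    row = conn.execute("SELECT id FROM tags WHERE track_id = ? AND key = ?",
                       (track_id, key)).fetchone()
    if row:
        # Existing tag: overwrite the value in place
        conn.execute("UPDATE tags SET value = ? WHERE id = ?", (value, row[0]))
        conn.commit()
        return row[0]
    # No match: insert a fresh row
    cur = conn.execute("INSERT INTO tags (track_id, key, value) VALUES (?, ?, ?)",
                       (track_id, key, value))
    conn.commit()
    return cur.lastrowid

first = upsert_tag(1, 'language', 'eng')
second = upsert_tag(1, 'language', 'ger')  # same (track_id, key): row is updated, not duplicated
print(first == second)  # True
```

The look-up-then-write shape mirrors the controller's `q.first()` / `if tag:` branch; the difference is only that the controller goes through an ORM session instead of raw SQL.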
@@ -1,98 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid


# Screen[dict[int, str, int]]
class TagDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 4 9;
        grid-rows: 2 2 2 2 2 2 2 2 2;
        grid-columns: 30 30 30 30;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }
    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, key=None, value=None):
        super().__init__()
        self.__key = key
        self.__value = value


    def on_mount(self):

        self.query_one("#keylabel", Static).update(str(self.__key))
        self.query_one("#valuelabel", Static).update(str(self.__value))


    def compose(self):

        yield Header()

        with Grid():

            # 1
            yield Static("Are you sure you want to delete this tag?", id="toplabel", classes="five")

            # 2
            yield Static("Key")
            yield Static(" ", id="keylabel", classes="four")

            # 3
            yield Static("Value")
            yield Static(" ", id="valuelabel", classes="four")

            # 4
            yield Static(" ", classes="five")

            # 9
            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()


    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            tag = (self.__key, self.__value)
            self.dismiss(tag)

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,132 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid


# Screen[dict[int, str, int]]
class TagDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 20;
        grid-rows: 2 2 2 2 2 3 2 2 2 2 2 6 2 2 6 2 2 2 2 6;
        grid-columns: 25 25 25 25 225;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    SelectionList {
        border: none;
        min-height: 6;
    }
    Select {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, key=None, value=None):
        super().__init__()
        self.__key = key
        self.__value = value


    def on_mount(self):

        if self.__key is not None:
            self.query_one("#key_input", Input).value = str(self.__key)

        if self.__value is not None:
            self.query_one("#value_input", Input).value = str(self.__value)


    def compose(self):

        yield Header()

        with Grid():

            # 8
            yield Static("Key")
            yield Input(id="key_input", classes="four")

            yield Static("Value")
            yield Input(id="value_input", classes="four")

            # 17
            yield Static(" ", classes="five")

            # 18
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")

            # 19
            yield Static(" ", classes="five")

            # 20
            yield Static(" ", classes="five", id="messagestatic")

        yield Footer(id="footer")


    def getTagFromInput(self):

        tagKey = self.query_one("#key_input", Input).value
        tagValue = self.query_one("#value_input", Input).value

        return (tagKey, tagValue)


    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        # Check if the button pressed is the one we are interested in
        if event.button.id == "save_button":
            self.dismiss(self.getTagFromInput())

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,34 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class DispositionCombinator3():

    IDENTIFIER = 'disposition3'

    def __init__(self, context=None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return DispositionCombinator3.IDENTIFIER


    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[25:-3]
                for p
                in glob.glob(f"{ basePath }/disposition_combinator_3_*.py", recursive=True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        importlib.import_module(f"ffx.test.disposition_combinator_3_{ identifier }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.disposition_combinator_3_{ identifier }"]):
            # HINT: Excluding DispositionCombinator3 itself, as it seems to be included by the import (?)
            if inspect.isclass(obj) and name != 'DispositionCombinator3' and name.startswith('DispositionCombinator3'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [DispositionCombinator3.getClassReference(i) for i in DispositionCombinator3.list()]
@@ -1,279 +0,0 @@
import os, math, tempfile, click


from ffx.ffx_controller import FfxController

from ffx.process import executeProcess

from ffx.media_descriptor import MediaDescriptor
from ffx.track_type import TrackType

from ffx.helper import dictCache
from ffx.configuration_controller import ConfigurationController


SHORT_SUBTITLE_SEQUENCE = [{'start': 1, 'end': 2, 'text': 'yolo'},
                           {'start': 3, 'end': 4, 'text': 'zolo'},
                           {'start': 5, 'end': 6, 'text': 'golo'}]


def getTimeString(hours: float = 0.0,
                  minutes: float = 0.0,
                  seconds: float = 0.0,
                  millis: float = 0.0,
                  format: str = ''):

    duration = (hours * 3600.0
                + minutes * 60.0
                + seconds
                + millis / 1000.0)

    hours = math.floor(duration / 3600.0)
    remaining = duration - 3600.0 * hours

    minutes = math.floor(remaining / 60.0)
    remaining = remaining - 60.0 * minutes

    seconds = math.floor(remaining)
    remaining = remaining - seconds

    millis = math.floor(remaining * 1000)

    if format == 'ass':
        # ASS timestamps use centiseconds (H:MM:SS.cc)
        return f"{hours:01d}:{minutes:02d}:{seconds:02d}.{millis // 10:02d}"

    if format == 'srt':
        # SRT uses a comma as the decimal separator (HH:MM:SS,mmm)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

    # vtt
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}.{millis:03d}"


def createAssFile(entries: dict, directory=None):

    # [Script Info]
    # ; Script generated by FFmpeg/Lavc61.3.100
    # ScriptType: v4.00+
    # PlayResX: 384
    # PlayResY: 288
    # ScaledBorderAndShadow: yes
    # YCbCr Matrix: None
    #
    # [V4+ Styles]
    # Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
    # Style: Default,Arial,16,&Hffffff,&Hffffff,&H0,&H0,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1
    #
    # [Events]
    # Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
    # Dialogue: 0,0:00:01.00,0:00:02.00,Default,,0,0,0,,yolo
    # Dialogue: 0,0:00:03.00,0:00:04.00,Default,,0,0,0,,zolo
    # Dialogue: 0,0:00:05.00,0:00:06.00,Default,,0,0,0,,golo
    tmpFileName = tempfile.mktemp(suffix=".ass", dir=directory)

    with open(tmpFileName, 'w') as tmpFile:

        tmpFile.write("[Script Info]\n")
        tmpFile.write("; Script generated by Ffx\n")
        tmpFile.write("ScriptType: v4.00+\n")
        tmpFile.write("PlayResX: 384\n")
        tmpFile.write("PlayResY: 288\n")
        tmpFile.write("ScaledBorderAndShadow: yes\n")
        tmpFile.write("YCbCr Matrix: None\n")
        tmpFile.write("\n")
        tmpFile.write("[V4+ Styles]\n")
        tmpFile.write("Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding\n")
        tmpFile.write("Style: Default,Arial,16,&Hffffff,&Hffffff,&H0,&H0,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1\n")
        tmpFile.write("\n")
        tmpFile.write("[Events]\n")
        tmpFile.write("Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text\n")

        for entryIndex in range(len(entries)):
            tmpFile.write(f"Dialogue: 0,{getTimeString(seconds=entries[entryIndex]['start'], format='ass')},{getTimeString(seconds=entries[entryIndex]['end'], format='ass')},Default,,0,0,0,,{entries[entryIndex]['text']}\n")

    return tmpFileName


def createSrtFile(entries: dict, directory=None):
    # 1
    # 00:00:00,000 --> 00:00:02,500
    # Welcome to the Example Subtitle File!
    #
    # 2
    # 00:00:03,000 --> 00:00:06,000
    # This is a demonstration of SRT subtitles.
    #
    # 3
    # 00:00:07,000 --> 00:00:10,500
    # You can use SRT files to add subtitles to your videos.

    tmpFileName = tempfile.mktemp(suffix=".srt", dir=directory)

    with open(tmpFileName, 'w') as tmpFile:

        for entryIndex in range(len(entries)):

            # SRT entry numbering starts at 1
            tmpFile.write(f"{entryIndex + 1}\n")
            tmpFile.write(f"{getTimeString(seconds=entries[entryIndex]['start'], format='srt')} --> {getTimeString(seconds=entries[entryIndex]['end'], format='srt')}\n")
            tmpFile.write(f"{entries[entryIndex]['text']}\n\n")

    return tmpFileName


def createVttFile(entries: dict, directory=None):
    # WEBVTT
    #
    # 01:20:33.050 --> 01:20:35.050
    # Yolo

    tmpFileName = tempfile.mktemp(suffix=".vtt", dir=directory)

    with open(tmpFileName, 'w') as tmpFile:

        tmpFile.write("WEBVTT\n")

        for entryIndex in range(len(entries)):

            tmpFile.write("\n")
            tmpFile.write(f"{getTimeString(seconds=entries[entryIndex]['start'])} --> {getTimeString(seconds=entries[entryIndex]['end'])}\n")
            tmpFile.write(f"{entries[entryIndex]['text']}\n")

    return tmpFileName


def createMediaTestFile(mediaDescriptor: MediaDescriptor,
                        directory: str = '',
                        baseName: str = 'media',
                        format: str = '',
                        extension: str = 'mkv',
                        sizeX: int = 1280,
                        sizeY: int = 720,
                        rate: int = 25,
                        length: int = 10,
                        logger=None):

    # subtitleFilePath = createVttFile(SHORT_SUBTITLE_SEQUENCE)

    # commandTokens = FfxController.COMMAND_TOKENS
    commandTokens = ['ffmpeg', '-y']

    generatorCache = []
    generatorTokens = []
    mappingTokens = []
    importTokens = []
    metadataTokens = []

    for mediaTagKey, mediaTagValue in mediaDescriptor.getTags().items():
        metadataTokens += ['-metadata:g', f"{mediaTagKey}={mediaTagValue}"]

    subIndexCounter = {}

    # for trackDescriptor in mediaDescriptor.getAllTrackDescriptors():
    for trackDescriptor in mediaDescriptor.getTrackDescriptors():

        trackType = trackDescriptor.getType()

        if trackType == TrackType.VIDEO:

            cacheIndex, generatorCache = dictCache({'type': TrackType.VIDEO}, generatorCache)
            # click.echo(f"createMediaTestFile() cache index={cacheIndex} size={len(generatorCache)}")

            if cacheIndex == -1:
                generatorTokens += ['-f',
                                    'lavfi',
                                    '-i',
                                    f"color=size={sizeX}x{sizeY}:rate={rate}:color=black"]

            sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex
            mappingTokens += ['-map', f"{sourceIndex}:v:0"]

            if trackType not in subIndexCounter.keys():
                subIndexCounter[trackType] = 0
            for mediaTagKey, mediaTagValue in trackDescriptor.getTags().items():
                metadataTokens += [f"-metadata:s:{trackType.indicator()}:{subIndexCounter[trackType]}",
                                   f"{mediaTagKey}={mediaTagValue}"]
            subIndexCounter[trackType] += 1

        if trackType == TrackType.AUDIO:

            audioLayout = 'stereo'

            cacheIndex, generatorCache = dictCache({'type': TrackType.AUDIO, 'layout': audioLayout}, generatorCache)
            # click.echo(f"createMediaTestFile() cache index={cacheIndex} size={len(generatorCache)}")

            if cacheIndex == -1:
                generatorTokens += ['-f',
                                    'lavfi',
                                    '-i',
                                    f"anullsrc=channel_layout={audioLayout}:sample_rate=44100"]

            sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex
            mappingTokens += ['-map', f"{sourceIndex}:a:0"]

            if trackType not in subIndexCounter.keys():
                subIndexCounter[trackType] = 0
            for mediaTagKey, mediaTagValue in trackDescriptor.getTags().items():
                metadataTokens += [f"-metadata:s:{trackType.indicator()}:{subIndexCounter[trackType]}",
                                   f"{mediaTagKey}={mediaTagValue}"]
|
|
||||||
subIndexCounter[trackType] += 1
|
|
||||||
|
|
||||||
if trackType == TrackType.SUBTITLE:
|
|
||||||
|
|
||||||
cacheIndex, generatorCache = dictCache({'type': TrackType.SUBTITLE}, generatorCache)
|
|
||||||
# click.echo(f"createMediaTestFile() cache index={cacheIndex} size={len(generatorCache)}")
|
|
||||||
|
|
||||||
if cacheIndex == -1:
|
|
||||||
importTokens = ['-i', createVttFile(SHORT_SUBTITLE_SEQUENCE, directory=directory if directory else None)]
|
|
||||||
|
|
||||||
sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex
|
|
||||||
mappingTokens += ['-map', f"{sourceIndex}:s:0"]
|
|
||||||
|
|
||||||
if not trackType in subIndexCounter.keys():
|
|
||||||
subIndexCounter[trackType] = 0
|
|
||||||
for mediaTagKey, mediaTagValue in trackDescriptor.getTags().items():
|
|
||||||
metadataTokens += [f"-metadata:s:{trackType.indicator()}:{subIndexCounter[trackType]}",
|
|
||||||
f"{mediaTagKey}={mediaTagValue}"]
|
|
||||||
subIndexCounter[trackType] += 1
|
|
||||||
|
|
||||||
#TODO: Optimize too many runs
|
|
||||||
ffxContext = {'config': ConfigurationController(), 'logger': logger}
|
|
||||||
fc = FfxController(ffxContext, mediaDescriptor)
|
|
||||||
|
|
||||||
commandTokens += (generatorTokens
|
|
||||||
+ importTokens
|
|
||||||
+ mappingTokens
|
|
||||||
+ metadataTokens
|
|
||||||
+ fc.generateDispositionTokens())
|
|
||||||
|
|
||||||
|
|
||||||
commandTokens += ['-t', str(length)]
|
|
||||||
|
|
||||||
if format:
|
|
||||||
commandTokens += ['-f', format]
|
|
||||||
|
|
||||||
fileName = f"{baseName}.{extension}"
|
|
||||||
|
|
||||||
if directory:
|
|
||||||
outputPath = os.path.join(directory, fileName)
|
|
||||||
else:
|
|
||||||
outputPath = fileName
|
|
||||||
|
|
||||||
commandTokens += [outputPath]
|
|
||||||
|
|
||||||
|
|
||||||
ctx = {'logger': logger}
|
|
||||||
|
|
||||||
out, err, rc = executeProcess(commandTokens, context = ctx)
|
|
||||||
|
|
||||||
if not logger is None:
|
|
||||||
if out:
|
|
||||||
logger.debug(f"createMediaTestFile(): Process output: {out}")
|
|
||||||
if rc:
|
|
||||||
logger.debug(f"createMediaTestFile(): Process returned ERROR {rc} ({err})")
|
|
||||||
|
|
||||||
|
|
||||||
return outputPath
|
|
||||||
|
|
||||||
|
|
||||||
def createEmptyDirectory():
|
|
||||||
return tempfile.mkdtemp()
|
|
||||||
|
|
||||||
def createEmptyFile(suffix=None):
|
|
||||||
return tempfile.mkstemp(suffix=suffix)
|
|
||||||
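createMediaTestFile() deduplicates its lavfi generator inputs through a `dictCache` helper that is defined elsewhere in helper.py and not shown in this hunk. A minimal sketch of the contract the call sites assume (return the index of an equal dict, or append it and signal a miss with -1) could look like this; the implementation below is hypothetical:

```python
# Hypothetical sketch of the dictCache contract assumed by
# createMediaTestFile(): look up an equal dict in the cache list and
# return its index, or append the dict and signal a miss with -1.
def dictCache(entry: dict, cache: list):
    for index, cached in enumerate(cache):
        if cached == entry:          # cache hit: reuse the existing input
            return index, cache
    cache.append(entry)              # cache miss: register a new input
    return -1, cache
```

With this contract, `sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex` always resolves to the ffmpeg input index that carries the needed generator.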
@@ -1,36 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class LabelCombinator():

    IDENTIFIER = 'label'
    PREFIX = 'label_combinator_'

    LABEL = 'ffx'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return LabelCombinator.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[len(LabelCombinator.PREFIX):-3]
                for p
                in glob.glob(f"{ basePath }/{LabelCombinator.PREFIX}*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        importlib.import_module(f"ffx.test.{LabelCombinator.PREFIX}{ identifier }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.{LabelCombinator.PREFIX}{ identifier }"]):
            #HINT: Excluding LabelCombinator itself as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'LabelCombinator' and name.startswith('LabelCombinator'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [LabelCombinator.getClassReference(i) for i in LabelCombinator.list()]
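LabelCombinator.list() derives plugin identifiers by globbing for module files and slicing the class prefix and the `.py` suffix off each basename. The string manipulation in isolation, with made-up file names:

```python
import os

# Stand-alone illustration of the filename-to-identifier slicing used by
# LabelCombinator.list(); the paths here are invented for the example.
PREFIX = 'label_combinator_'

def identifierFromPath(path: str) -> str:
    # 'label_combinator_codec.py' -> 'codec': drop the prefix and '.py'
    return os.path.basename(path)[len(PREFIX):-3]
```

The deleted MediaCombinator and TrackTagCombinator classes below use the same pattern with hard-coded slice offsets (`[17:-3]`, `[23:-3]`) instead of `len(PREFIX)`.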
@@ -1,33 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class MediaCombinator():

    IDENTIFIER = 'media'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return MediaCombinator.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[17:-3]
                for p
                in glob.glob(f"{ basePath }/media_combinator_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        importlib.import_module(f"ffx.test.media_combinator_{ identifier }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.media_combinator_{ identifier }"]):
            #HINT: Excluding MediaCombinator as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'MediaCombinator' and name.startswith('MediaCombinator'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [MediaCombinator.getClassReference(i) for i in MediaCombinator.list()]
@@ -1,166 +0,0 @@
import os, sys, click

from .scenario import Scenario

from ffx.test.helper import createMediaTestFile
from ffx.process import executeProcess

from ffx.file_properties import FileProperties

from ffx.media_descriptor import MediaDescriptor
from ffx.track_descriptor import TrackDescriptor

from ffx.track_type import TrackType
from ffx.track_disposition import TrackDisposition

from ffx.test.media_combinator import MediaCombinator


class Scenario2(Scenario):
    """Creating file VAa, h264/aac/aac
    Converting to VaA, vp9/opus/opus
    No tmdb, default parameters"""

    TEST_FILE_EXTENSION = 'mkv'
    EXPECTED_FILE_EXTENSION = 'webm'


    def __init__(self, context):

        context['use_tmdb'] = False
        context['use_pattern'] = False

        super().__init__(context)

    def getScenario(self):
        return self.__class__.__name__[8:]


    def job(self, yieldObj: dict):

        testContext = self._context.copy()

        targetYieldObj = yieldObj['target']
        # presetYieldObj = yieldObj['preset'] # not used here

        identifier = targetYieldObj['identifier']
        variantList = targetYieldObj['variants']

        variantIdentifier = '-'.join(variantList)
        variantLabel = f"{self.__class__.__name__} Variant {variantIdentifier}"

        sourceMediaDescriptor: MediaDescriptor = targetYieldObj['payload']

        assertSelectorList: list = targetYieldObj['assertSelectors']
        assertFuncList = targetYieldObj['assertFuncs']
        shouldFail = targetYieldObj['shouldFail']


        if self._context['test_variant'] and not variantIdentifier.startswith(self._context['test_variant']):
            return

        if (self._context['test_limit'] and (self._context['test_passed_counter'] + self._context['test_failed_counter'])
                >= self._context['test_limit']):
            return

        self._logger.debug(f"Running Job: {variantLabel}")


        # Phase 1: Setup source files

        self.clearTestDirectory()
        mediaFilePath = createMediaTestFile(mediaDescriptor=sourceMediaDescriptor,
                                            directory=self._testDirectory,
                                            logger=self._logger,
                                            length = 2)


        # Phase 2: Run ffx

        commandSequence = [sys.executable,
                           self._ffxExecutablePath]

        if self._context['verbosity']:
            commandSequence += ['--verbose',
                                str(self._context['verbosity'])]

        commandSequence += ['convert',
                            mediaFilePath,
                            '--no-prompt',
                            '--no-signature']

        out, err, rc = executeProcess(commandSequence, directory = self._testDirectory, context = self._context)

        if out and self._context['verbosity'] >= 9:
            self._logger.debug(f"{variantLabel}: Process output: {out}")
        if rc:
            self._logger.debug(f"{variantLabel}: Process returned ERROR {rc} ({err})")


        # Phase 3: Evaluate results

        resultFilenames = [rf for rf in self.getFilenamesInTestDirectory() if rf.endswith(f".{Scenario2.EXPECTED_FILE_EXTENSION}")]

        self._logger.debug(f"{variantLabel}: Result filenames: {resultFilenames}")

        try:

            jobFailed = bool(rc)
            self._logger.debug(f"{variantLabel}: Should fail: {shouldFail} / actually failed: {jobFailed}")

            assert (jobFailed == shouldFail
                    ), f"Process {'failed' if jobFailed else 'did not fail'}"


            if not jobFailed:

                resultFile = os.path.join(self._testDirectory, 'media.webm')

                assert (os.path.isfile(resultFile)
                        ), f"Result file 'media.webm' in path '{self._testDirectory}' wasn't created"

                resultFileProperties = FileProperties(testContext, resultFile)
                resultMediaDescriptor = resultFileProperties.getMediaDescriptor()

                # resultMediaTracks = resultMediaDescriptor.getAllTrackDescriptors()
                resultMediaTracks = resultMediaDescriptor.getTrackDescriptors()

                for assertIndex in range(len(assertSelectorList)):

                    assertSelector = assertSelectorList[assertIndex]
                    assertFunc = assertFuncList[assertIndex]
                    assertVariant = variantList[assertIndex]

                    if assertSelector == 'M':
                        assertFunc()
                        for variantIndex in range(len(assertVariant)):
                            assert (assertVariant[variantIndex].lower() == resultMediaTracks[variantIndex].getType().indicator()
                                    ), f"Stream #{variantIndex} is not of type {resultMediaTracks[variantIndex].getType().label()}"

                    elif assertSelector == 'AD' or assertSelector == 'AT':
                        assertFunc({'tracks': resultMediaDescriptor.getAudioTracks()})

                    elif assertSelector == 'SD' or assertSelector == 'ST':
                        assertFunc({'tracks': resultMediaDescriptor.getSubtitleTracks()})

                    elif type(assertSelector) is str:
                        if assertSelector == 'J':
                            assertFunc()


            self._context['test_passed_counter'] += 1
            self._reportLogger.info(f"{variantLabel}: Test passed")

        except AssertionError as ae:

            self._context['test_failed_counter'] += 1
            self._reportLogger.error(f"{variantLabel}: Test FAILED ({ae})")


    def run(self):
        MC_list = MediaCombinator.getAllClassReferences()
        for MC in MC_list:
            self._logger.debug(f"MC={MC.__name__}")
            mc = MC(context = self._context)
            for y in mc.getYield():
                self.job(y)
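The pass/fail bookkeeping at the heart of Scenario2.job() reduces to comparing the expected-failure flag from the combinator against the process return code, then bumping one of two counters. A reduced sketch of that logic, with hypothetical names, decoupled from the class:

```python
# Minimal model of the Phase-3 evaluation in Scenario2.job(): a non-zero
# return code counts as a failed run, and the test passes when that
# matches the combinator's expectation (shouldFail).
def evaluateRun(rc: int, shouldFail: bool, counters: dict) -> bool:
    jobFailed = bool(rc)
    try:
        assert jobFailed == shouldFail, \
            f"Process {'failed' if jobFailed else 'did not fail'}"
        counters['test_passed_counter'] += 1
        return True
    except AssertionError:
        counters['test_failed_counter'] += 1
        return False
```

This mirrors why the real job() catches AssertionError rather than letting it propagate: a mismatch is a reported test failure, not a crash of the scenario runner.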
@@ -1,33 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class TrackTagCombinator2():

    IDENTIFIER = 'trackTag2'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return TrackTagCombinator2.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[23:-3]
                for p
                in glob.glob(f"{ basePath }/track_tag_combinator_2_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        importlib.import_module(f"ffx.test.track_tag_combinator_2_{ identifier }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.track_tag_combinator_2_{ identifier }"]):
            #HINT: Excluding TrackTagCombinator2 itself as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'TrackTagCombinator2' and name.startswith('TrackTagCombinator2'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [TrackTagCombinator2.getClassReference(i) for i in TrackTagCombinator2.list()]
@@ -1,33 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class TrackTagCombinator3():

    IDENTIFIER = 'trackTag3'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return TrackTagCombinator3.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[23:-3]
                for p
                in glob.glob(f"{ basePath }/track_tag_combinator_3_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        importlib.import_module(f"ffx.test.track_tag_combinator_3_{ identifier }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.track_tag_combinator_3_{ identifier }"]):
            #HINT: Excluding TrackTagCombinator3 itself as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'TrackTagCombinator3' and name.startswith('TrackTagCombinator3'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [TrackTagCombinator3.getClassReference(i) for i in TrackTagCombinator3.list()]
@@ -1,134 +0,0 @@
import os, requests, time, logging
from datetime import datetime


class TMDB_REQUEST_EXCEPTION(Exception):
    def __init__(self, statusCode, statusMessage):
        errorMessage = f"TMDB query failed with status code {statusCode}: {statusMessage}"
        super().__init__(errorMessage)

class TMDB_API_KEY_NOT_PRESENT_EXCEPTION(Exception):
    def __str__(self):
        return 'TMDB api key is not available, please set environment variable TMDB_API_KEY'

class TMDB_EXCESSIVE_USAGE_EXCEPTION(Exception):
    def __str__(self):
        return 'Rate limit was triggered too often'


class TmdbController():

    DEFAULT_LANGUAGE = 'de-DE'

    RATE_LIMIT_WAIT_SECONDS = 10
    RATE_LIMIT_RETRIES = 3

    def __init__(self, context = None):
        self.__context = context

        if context is None:
            self.__logger = logging.getLogger('FFX')
            self.__logger.addHandler(logging.NullHandler())
        else:
            self.__logger = context['logger']

        self.__tmdbApiKey = os.environ.get('TMDB_API_KEY', None)
        if self.__tmdbApiKey is None:
            raise TMDB_API_KEY_NOT_PRESENT_EXCEPTION

        self.tmdbLanguage = TmdbController.DEFAULT_LANGUAGE


    def getTmdbRequest(self, tmdbUrl):
        retries = TmdbController.RATE_LIMIT_RETRIES
        while True:
            response = requests.get(tmdbUrl)
            if response.status_code == 429:
                if not retries:
                    raise TMDB_EXCESSIVE_USAGE_EXCEPTION()
                self.__logger.warning('TMDB Rate limit (status_code 429)')
                time.sleep(TmdbController.RATE_LIMIT_WAIT_SECONDS)
                retries -= 1
            else:
                jsonResult = response.json()
                if ('success' in jsonResult.keys()
                        and not jsonResult['success']):
                    raise TMDB_REQUEST_EXCEPTION(jsonResult['status_code'], jsonResult['status_message'])
                return jsonResult


    def queryShow(self, showId):
        """
        First level keys in the response object:
            adult                   bool
            backdrop_path           str
            created_by              []
            episode_run_time        []
            first_air_date          str YYYY-MM-DD
            genres                  []
            homepage                str
            id                      int
            in_production           bool
            languages               []
            last_air_date           str YYYY-MM-DD
            last_episode_to_air     {}
            name                    str
            next_episode_to_air     null
            networks                []
            number_of_episodes      int
            number_of_seasons       int
            origin_country          []
            original_language       str
            original_name           str
            overview                str
            popularity              float
            poster_path             str
            production_companies    []
            production_countries    []
            seasons                 []
            spoken_languages        []
            status                  str
            tagline                 str
            type                    str
            vote_average            float
            vote_count              int
        """

        urlParams = f"?language={self.tmdbLanguage}&api_key={self.__tmdbApiKey}"

        tmdbUrl = f"https://api.themoviedb.org/3/tv/{showId}{urlParams}"

        return self.getTmdbRequest(tmdbUrl)


    def getShowNameAndYear(self, showId: int):

        showResult = self.queryShow(int(showId))
        firstAirDate = datetime.strptime(showResult['first_air_date'], '%Y-%m-%d')

        return str(showResult['name']), int(firstAirDate.year)


    def queryEpisode(self, showId, season, episode):
        """
        First level keys in the response object:
            air_date            str 'YYYY-MM-DD'
            crew                []
            episode_number      int
            guest_stars         []
            name                str
            overview            str
            id                  int
            production_code
            runtime             int
            season_number       int
            still_path          str '/filename.jpg'
            vote_average        float
            vote_count          int
        """

        urlParams = f"?language={self.tmdbLanguage}&api_key={self.__tmdbApiKey}"

        tmdbUrl = f"https://api.themoviedb.org/3/tv/{showId}/season/{season}/episode/{episode}{urlParams}"

        return self.getTmdbRequest(tmdbUrl)
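The retry loop in TmdbController.getTmdbRequest() can be exercised without the network by injecting the HTTP call and the sleep. A hedged sketch of the same control flow, with hypothetical `fetch`/`sleep` callables standing in for `requests.get` and `time.sleep`:

```python
# Sketch of the 429 retry loop in TmdbController.getTmdbRequest(), with
# the HTTP call and the wait injected so the flow can run offline.
# fetch() returns (status_code, payload); a 429 consumes one retry.
def fetchWithRetry(fetch, retries=3, sleep=lambda seconds: None, waitSeconds=10):
    while True:
        status, payload = fetch()
        if status == 429:
            if not retries:
                raise RuntimeError('Rate limit was triggered too often')
            sleep(waitSeconds)
            retries -= 1
        else:
            return payload
```

Note the off-by-design detail this preserves: because the check is `if not retries` before decrementing, the caller gets `RATE_LIMIT_RETRIES` waits before the excessive-usage exception is raised.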
@@ -1,59 +0,0 @@
from enum import Enum


class TrackCodec(Enum):

    H265 = {'identifier': 'hevc', 'format': 'h265', 'extension': 'h265', 'label': 'H.265'}
    H264 = {'identifier': 'h264', 'format': 'h264', 'extension': 'h264', 'label': 'H.264'}
    MPEG4 = {'identifier': 'mpeg4', 'format': 'm4v', 'extension': 'm4v', 'label': 'MPEG-4'}
    MPEG2 = {'identifier': 'mpeg2video', 'format': 'mpeg2video', 'extension': 'mpg', 'label': 'MPEG-2'}

    AAC = {'identifier': 'aac', 'format': None, 'extension': 'aac', 'label': 'AAC'}
    AC3 = {'identifier': 'ac3', 'format': 'ac3', 'extension': 'ac3', 'label': 'AC3'}
    EAC3 = {'identifier': 'eac3', 'format': 'eac3', 'extension': 'eac3', 'label': 'EAC3'}
    DTS = {'identifier': 'dts', 'format': 'dts', 'extension': 'dts', 'label': 'DTS'}
    MP3 = {'identifier': 'mp3', 'format': 'mp3', 'extension': 'mp3', 'label': 'MP3'}

    PCM_S24LE = {'identifier': 'pcm_s24le', 'format': 's32', 'extension': 'raw', 'label': 'PCM_S24LE'}

    SRT = {'identifier': 'subrip', 'format': 'srt', 'extension': 'srt', 'label': 'SRT'}
    ASS = {'identifier': 'ass', 'format': 'ass', 'extension': 'ass', 'label': 'ASS'}
    PGS = {'identifier': 'hdmv_pgs_subtitle', 'format': 'sup', 'extension': 'sup', 'label': 'PGS'}
    VOBSUB = {'identifier': 'dvd_subtitle', 'format': None, 'extension': 'mkv', 'label': 'VobSub'}

    PNG = {'identifier': 'png', 'format': None, 'extension': 'png', 'label': 'PNG'}

    UNKNOWN = {'identifier': 'unknown', 'format': None, 'extension': None, 'label': 'UNKNOWN'}


    def identifier(self):
        """Returns the codec identifier"""
        return str(self.value['identifier'])

    def label(self):
        """Returns the codec label as string"""
        return str(self.value['label'])

    def format(self):
        """Returns the codec format"""
        return self.value['format']

    def extension(self):
        """Returns the corresponding extension"""
        return str(self.value['extension'])

    @staticmethod
    def identify(identifier: str):
        clist = [c for c in TrackCodec if c.value['identifier'] == str(identifier)]
        if clist:
            return clist[0]
        else:
            return TrackCodec.UNKNOWN

    @staticmethod
    def fromLabel(label: str):
        clist = [c for c in TrackCodec if c.value['label'] == str(label)]
        if clist:
            return clist[0]
        else:
            return TrackCodec.UNKNOWN
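The TrackCodec lookup pattern (enum members carrying dicts, matched by one field with an UNKNOWN fallback) is self-contained enough to demonstrate with a reduced model; the members below are a subset chosen for illustration:

```python
from enum import Enum

# Reduced model of the TrackCodec lookup: identify() matches on the
# ffprobe-style identifier and falls back to UNKNOWN for anything else.
class MiniCodec(Enum):
    H264 = {'identifier': 'h264', 'label': 'H.264'}
    AAC = {'identifier': 'aac', 'label': 'AAC'}
    UNKNOWN = {'identifier': 'unknown', 'label': 'UNKNOWN'}

    @staticmethod
    def identify(identifier: str):
        matches = [c for c in MiniCodec if c.value['identifier'] == str(identifier)]
        return matches[0] if matches else MiniCodec.UNKNOWN
```

Because the dict values are all distinct, Python's Enum aliasing never collapses members, and the linear scan is cheap for a list this short.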
@@ -1,278 +0,0 @@
import click

from ffx.model.track import Track

from .track_type import TrackType
from .track_disposition import TrackDisposition

from ffx.model.track_tag import TrackTag
from ffx.track_descriptor import TrackDescriptor


class TrackController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session'] # convenience

        self.__configurationData = self.context['config'].getData()

        metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}

        self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
        self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
        self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
        self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'remove' in metadataConfiguration['streams'].keys() else [])
        self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'ignore' in metadataConfiguration['streams'].keys() else [])


    def addTrack(self, trackDescriptor : TrackDescriptor, patternId = None):

        # option to override pattern id in case the track descriptor has not set it
        patId = int(trackDescriptor.getPatternId() if patternId is None else patternId)

        try:
            s = self.Session()
            track = Track(pattern_id = patId,
                          track_type = int(trackDescriptor.getType().index()),
                          codec_name = str(trackDescriptor.getCodec().identifier()),
                          index = int(trackDescriptor.getIndex()),
                          source_index = int(trackDescriptor.getSourceIndex()),
                          disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet())),
                          audio_layout = trackDescriptor.getAudioLayout().index())

            s.add(track)
            s.commit()

            for k, v in trackDescriptor.getTags().items():

                # Filter tags that make no sense to preserve
                if k not in self.__ignoreTrackKeys and k not in self.__removeTrackKeys:
                    tag = TrackTag(track_id = track.id,
                                   key = k,
                                   value = v)
                    s.add(tag)
            s.commit()

        except Exception as ex:
            raise click.ClickException(f"TrackController.addTrack(): {repr(ex)}")
        finally:
            s.close()


    def updateTrack(self, trackId, trackDescriptor : TrackDescriptor):

        if type(trackDescriptor) is not TrackDescriptor:
            raise TypeError('TrackController.updateTrack(): Argument trackDescriptor is required to be of type TrackDescriptor')

        try:
            s = self.Session()
            q = s.query(Track).filter(Track.id == int(trackId))

            if q.count():

                track : Track = q.first()

                track.index = int(trackDescriptor.getIndex())

                track.track_type = int(trackDescriptor.getType().index())
                track.codec_name = str(trackDescriptor.getCodec().identifier())
                track.audio_layout = int(trackDescriptor.getAudioLayout().index())

                track.disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet()))

                descriptorTags = trackDescriptor.getTags()
                tagKeysInDescriptor = set(descriptorTags.keys())
                tagKeysInDb = {t.key for t in track.track_tags}

                for k in tagKeysInDescriptor & tagKeysInDb: # to update
                    tags = [t for t in track.track_tags if t.key == k]
                    tags[0].value = descriptorTags[k]
                for k in tagKeysInDescriptor - tagKeysInDb: # to add
                    tag = TrackTag(track_id=track.id, key=k, value=descriptorTags[k])
                    s.add(tag)
                for k in tagKeysInDb - tagKeysInDescriptor: # to remove
                    tags = [t for t in track.track_tags if t.key == k]
                    s.delete(tags[0])

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TrackController.updateTrack(): {repr(ex)}")
        finally:
            s.close()
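The tag synchronisation in updateTrack() is three set differences over the tag keys: intersection to update, descriptor-only keys to add, database-only keys to remove. The same arithmetic on plain dicts:

```python
# The three set differences driving tag synchronisation in
# TrackController.updateTrack(), applied to plain dicts instead of ORM
# rows: 'wanted' is the descriptor's tag dict, 'stored' the DB state.
def diffTags(wanted: dict, stored: dict):
    wantedKeys, storedKeys = set(wanted), set(stored)
    toUpdate = {k: wanted[k] for k in wantedKeys & storedKeys}  # overwrite
    toAdd = {k: wanted[k] for k in wantedKeys - storedKeys}     # insert
    toRemove = storedKeys - wantedKeys                          # delete
    return toUpdate, toAdd, toRemove
```

Like the original, this overwrites every key in the intersection rather than first checking whether the value actually changed; SQLAlchemy's unit of work makes the redundant assignments harmless.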
    def findTracks(self, patternId):

        try:
            s = self.Session()

            q = s.query(Track).filter(Track.pattern_id == int(patternId))
            return sorted([t for t in q.all()], key=lambda d: d.getIndex())

        except Exception as ex:
            raise click.ClickException(f"TrackController.findTracks(): {repr(ex)}")
        finally:
            s.close()


    def findSiblingDescriptors(self, patternId):
        """Finds all stored tracks related to a pattern, packs them into descriptors
        (also setting sub indices) and returns the list of descriptors"""

        siblingTracks = self.findTracks(patternId)
        siblingDescriptors = []

        subIndexCounter = {}
        st: Track
        for st in siblingTracks:
            trackType = st.getType()

            if trackType not in subIndexCounter:
                subIndexCounter[trackType] = 0
            siblingDescriptors.append(st.getDescriptor(subIndex=subIndexCounter[trackType]))
            subIndexCounter[trackType] += 1

        return siblingDescriptors


    #TODO: solve with an optional parameter ^
    def findVideoTracks(self, patternId):

        try:
            s = self.Session()

            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.track_type == TrackType.VIDEO.index())
            return [a for a in q.all()]

        except Exception as ex:
            raise click.ClickException(f"TrackController.findVideoTracks(): {repr(ex)}")
        finally:
            s.close()

    def findAudioTracks(self, patternId):

        try:
            s = self.Session()

            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.track_type == TrackType.AUDIO.index())
            return [a for a in q.all()]

        except Exception as ex:
            raise click.ClickException(f"TrackController.findAudioTracks(): {repr(ex)}")
        finally:
            s.close()

    def findSubtitleTracks(self, patternId):

        try:
            s = self.Session()

            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.track_type == TrackType.SUBTITLE.index())
            return [t for t in q.all()]

        except Exception as ex:
            raise click.ClickException(f"TrackController.findSubtitleTracks(): {repr(ex)}")
        finally:
            s.close()


    def getTrack(self, patternId : int, index: int) -> Track:

        try:
            s = self.Session()
            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.index == int(index))

            if q.count():
                return q.first()
            else:
                return None

        except Exception as ex:
            raise click.ClickException(f"TrackController.getTrack(): {repr(ex)}")
        finally:
            s.close()

    def setDispositionState(self, patternId: int, index: int, disposition : TrackDisposition, state : bool):

        if type(patternId) is not int:
            raise TypeError('TrackController.setDispositionState(): Argument patternId is required to be of type int')
        if type(index) is not int:
            raise TypeError('TrackController.setDispositionState(): Argument index is required to be of type int')
        if type(disposition) is not TrackDisposition:
            raise TypeError('TrackController.setDispositionState(): Argument disposition is required to be of type TrackDisposition')
        if type(state) is not bool:
            raise TypeError('TrackController.setDispositionState(): Argument state is required to be of type bool')

        try:
            s = self.Session()
            q = s.query(Track).filter(Track.pattern_id == patternId, Track.index == index)

            if q.count():

                track : Track = q.first()

                if state:
                    track.setDisposition(disposition)
                else:
                    track.resetDisposition(disposition)

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TrackController.setDispositionState(): {repr(ex)}")
        finally:
            s.close()

    def deleteTrack(self, trackId):
        try:
            s = self.Session()

            q = s.query(Track).filter(Track.id == int(trackId))

            if q.count():
                patternId = int(q.first().pattern_id)

                q_siblings = s.query(Track).filter(Track.pattern_id == patternId).order_by(Track.index)

                index = 0
                for track in q_siblings.all():
|
|
||||||
|
|
||||||
if track.id == int(trackId):
|
|
||||||
s.delete(track)
|
|
||||||
else:
|
|
||||||
track.index = index
|
|
||||||
index += 1
|
|
||||||
|
|
||||||
s.commit()
|
|
||||||
return True
|
|
||||||
|
|
||||||
return False
|
|
||||||
|
|
||||||
except Exception as ex:
|
|
||||||
raise click.ClickException(f"TrackController.deleteTrack(): {repr(ex)}")
|
|
||||||
finally:
|
|
||||||
s.close()
|
|
||||||
|
|
||||||
|
|
||||||
# def setDefaultSubTrack(self, trackType, subIndex):
|
|
||||||
# pass
|
|
||||||
#
|
|
||||||
# def setForcedSubTrack(self, trackType, subIndex):
|
|
||||||
# pass
|
|
||||||
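The sibling re-indexing performed by `TrackController.deleteTrack()` above can be sketched standalone. Plain dicts stand in for the SQLAlchemy `Track` model, and `delete_and_reindex` is a hypothetical helper name; the real code mutates mapped objects inside a session and commits:

```python
# Minimal sketch of deleteTrack()'s re-indexing, assuming plain dicts
# in place of the SQLAlchemy Track model.
def delete_and_reindex(tracks, track_id):
    """Remove the track with the given id and renumber the rest contiguously."""
    remaining = [t for t in sorted(tracks, key=lambda t: t["index"])
                 if t["id"] != track_id]
    for new_index, t in enumerate(remaining):
        t["index"] = new_index  # close the gap left by the deleted track
    return remaining

tracks = [{"id": 10, "index": 0}, {"id": 11, "index": 1}, {"id": 12, "index": 2}]
result = delete_and_reindex(tracks, 11)
# indices stay contiguous: ids 10 and 12 now at indices 0 and 1
```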
@@ -1,136 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from ffx.track_descriptor import TrackDescriptor

from .track_controller import TrackController


# Screen[dict[int, str, int]]
class TrackDeleteScreen(Screen):

    CSS = """
    Grid {
        grid-size: 4 9;
        grid-rows: 2 2 2 2 2 2 2 2 2;
        grid-columns: 30 30 30 30;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }
    .four {
        column-span: 4;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, trackDescriptor: TrackDescriptor):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        if type(trackDescriptor) is not TrackDescriptor:
            raise click.ClickException('TrackDeleteScreen.__init__(): trackDescriptor is required to be of type TrackDescriptor')

        self.__tc = TrackController(context=self.context)

        self.__trackDescriptor = trackDescriptor


    def on_mount(self):

        self.query_one("#subindexlabel", Static).update(str(self.__trackDescriptor.getSubIndex()))
        self.query_one("#patternlabel", Static).update(str(self.__trackDescriptor.getPatternId()))
        self.query_one("#languagelabel", Static).update(str(self.__trackDescriptor.getLanguage().label()))
        self.query_one("#titlelabel", Static).update(str(self.__trackDescriptor.getTitle()))


    def compose(self):

        yield Header()

        with Grid():

            #1
            yield Static(f"Are you sure to delete the following {self.__trackDescriptor.getType().label()} track?", id="toplabel", classes="four")

            #2
            yield Static("sub index")
            yield Static(" ", id="subindexlabel", classes="three")

            #3
            yield Static("from pattern")
            yield Static(" ", id="patternlabel", classes="three")

            #4
            yield Static(" ", classes="four")

            #5
            yield Static("Language")
            yield Static(" ", id="languagelabel", classes="three")

            #6
            yield Static("Title")
            yield Static(" ", id="titlelabel", classes="three")

            #7
            yield Static(" ", classes="four")

            #8
            yield Static(" ", classes="four")

            #9
            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()


    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            track = self.__tc.getTrack(self.__trackDescriptor.getPatternId(), self.__trackDescriptor.getIndex())

            if track is None:
                raise click.ClickException(f"Track is none: patternId={self.__trackDescriptor.getPatternId()} type={self.__trackDescriptor.getType()} subIndex={self.__trackDescriptor.getSubIndex()}")

            if self.__tc.deleteTrack(track.getId()):
                self.dismiss(self.__trackDescriptor)
            else:
                # TODO: show a message to the user
                self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,346 +0,0 @@
import logging
from typing import Self

from .iso_language import IsoLanguage
from .track_type import TrackType
from .audio_layout import AudioLayout
from .track_disposition import TrackDisposition
from .track_codec import TrackCodec

# from .helper import dictDiff, setDiff


class TrackDescriptor:

    CONTEXT_KEY = "context"

    ID_KEY = "id"
    INDEX_KEY = "index"
    SOURCE_INDEX_KEY = "source_index"
    SUB_INDEX_KEY = "sub_index"
    PATTERN_ID_KEY = "pattern_id"
    EXTERNAL_SOURCE_FILE_PATH_KEY = "external_source_file"

    DISPOSITION_SET_KEY = "disposition_set"
    TAGS_KEY = "tags"

    TRACK_TYPE_KEY = "track_type"
    CODEC_KEY = "codec_name"
    AUDIO_LAYOUT_KEY = "audio_layout"

    FFPROBE_INDEX_KEY = "index"
    FFPROBE_DISPOSITION_KEY = "disposition"
    FFPROBE_TAGS_KEY = "tags"
    FFPROBE_CODEC_TYPE_KEY = "codec_type"
    FFPROBE_CODEC_KEY = "codec_name"


    def __init__(self, **kwargs):

        if TrackDescriptor.CONTEXT_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.CONTEXT_KEY]) is not dict:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CONTEXT_KEY} is required to be of type dict"
                )
            self.__context = kwargs[TrackDescriptor.CONTEXT_KEY]
            self.__logger = self.__context['logger']
        else:
            self.__context = {}
            self.__logger = logging.getLogger('FFX')
            self.__logger.addHandler(logging.NullHandler())

        if TrackDescriptor.ID_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.ID_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.ID_KEY} is required to be of type int"
                )
            self.__trackId = kwargs[TrackDescriptor.ID_KEY]
        else:
            self.__trackId = -1

        if TrackDescriptor.PATTERN_ID_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.PATTERN_ID_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.PATTERN_ID_KEY} is required to be of type int"
                )
            self.__patternId = kwargs[TrackDescriptor.PATTERN_ID_KEY]
        else:
            self.__patternId = -1

        if TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY]) is not str:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY} is required to be of type str"
                )
            self.__externalSourceFilePath = kwargs[TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY]
        else:
            self.__externalSourceFilePath = ''

        if TrackDescriptor.INDEX_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.INDEX_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.INDEX_KEY} is required to be of type int"
                )
            self.__index = kwargs[TrackDescriptor.INDEX_KEY]
        else:
            self.__index = -1

        if (
            TrackDescriptor.SOURCE_INDEX_KEY in kwargs.keys()
            and type(kwargs[TrackDescriptor.SOURCE_INDEX_KEY]) is int
        ):
            self.__sourceIndex = kwargs[TrackDescriptor.SOURCE_INDEX_KEY]
        else:
            self.__sourceIndex = self.__index

        if TrackDescriptor.SUB_INDEX_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.SUB_INDEX_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.SUB_INDEX_KEY} is required to be of type int"
                )
            self.__subIndex = kwargs[TrackDescriptor.SUB_INDEX_KEY]
        else:
            self.__subIndex = -1

        if TrackDescriptor.TRACK_TYPE_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.TRACK_TYPE_KEY]) is not TrackType:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.TRACK_TYPE_KEY} is required to be of type TrackType"
                )
            self.__trackType = kwargs[TrackDescriptor.TRACK_TYPE_KEY]
        else:
            self.__trackType = TrackType.UNKNOWN

        if TrackDescriptor.CODEC_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.CODEC_KEY]) is not TrackCodec:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec"
                )
            self.__trackCodec = kwargs[TrackDescriptor.CODEC_KEY]
        else:
            self.__trackCodec = TrackCodec.UNKNOWN

        if TrackDescriptor.TAGS_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.TAGS_KEY]) is not dict:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.TAGS_KEY} is required to be of type dict"
                )
            self.__trackTags = kwargs[TrackDescriptor.TAGS_KEY]
        else:
            self.__trackTags = {}

        if TrackDescriptor.DISPOSITION_SET_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.DISPOSITION_SET_KEY]) is not set:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.DISPOSITION_SET_KEY} is required to be of type set"
                )
            for d in kwargs[TrackDescriptor.DISPOSITION_SET_KEY]:
                if type(d) is not TrackDisposition:
                    raise TypeError(
                        f"TrackDescriptor.__init__(): All elements of argument set {TrackDescriptor.DISPOSITION_SET_KEY} are required to be of type TrackDisposition"
                    )
            self.__dispositionSet = kwargs[TrackDescriptor.DISPOSITION_SET_KEY]
        else:
            self.__dispositionSet = set()

        if TrackDescriptor.AUDIO_LAYOUT_KEY in kwargs.keys():
            if type(kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY]) is not AudioLayout:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.AUDIO_LAYOUT_KEY} is required to be of type AudioLayout"
                )
            self.__audioLayout = kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY]
        else:
            self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED

    @classmethod
    def fromFfprobe(cls, streamObj, subIndex: int = -1):
        """Builds a TrackDescriptor from a single ffprobe stream object, e.g.:
        {
            "index": 4,
            "codec_name": "hdmv_pgs_subtitle",
            "codec_long_name": "HDMV Presentation Graphic Stream subtitles",
            "codec_type": "subtitle",
            "codec_tag_string": "[0][0][0][0]",
            "codec_tag": "0x0000",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/1000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1421035,
            "duration": "1421.035000",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0,
                "non_diegetic": 0,
                "captions": 0,
                "descriptions": 0,
                "metadata": 0,
                "dependent": 0,
                "still_image": 0
            },
            "tags": {
                "language": "ger",
                "title": "German Full"
            }
        }
        """

        trackType = (
            TrackType.fromLabel(streamObj["codec_type"])
            if "codec_type" in streamObj.keys()
            else TrackType.UNKNOWN
        )

        if trackType != TrackType.UNKNOWN:

            kwargs = {}

            kwargs[TrackDescriptor.INDEX_KEY] = (
                int(streamObj[TrackDescriptor.FFPROBE_INDEX_KEY])
                if TrackDescriptor.FFPROBE_INDEX_KEY in streamObj.keys()
                else -1
            )
            kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = kwargs[TrackDescriptor.INDEX_KEY]
            kwargs[TrackDescriptor.SUB_INDEX_KEY] = subIndex

            kwargs[TrackDescriptor.TRACK_TYPE_KEY] = trackType

            kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.identify(streamObj[TrackDescriptor.FFPROBE_CODEC_KEY])

            # Keep only the disposition flags ffprobe reports as set (value 1)
            # and that map to a known TrackDisposition.
            kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = (
                {
                    t
                    for d in (
                        k
                        for (k, v) in streamObj[
                            TrackDescriptor.FFPROBE_DISPOSITION_KEY
                        ].items()
                        if v
                    )
                    if (t := TrackDisposition.find(d)) is not None
                }
                if TrackDescriptor.FFPROBE_DISPOSITION_KEY in streamObj.keys()
                else set()
            )
            kwargs[TrackDescriptor.TAGS_KEY] = (
                streamObj[TrackDescriptor.FFPROBE_TAGS_KEY]
                if TrackDescriptor.FFPROBE_TAGS_KEY in streamObj.keys()
                else {}
            )
            kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = (
                AudioLayout.identify(streamObj)
                if trackType == TrackType.AUDIO
                else AudioLayout.LAYOUT_UNDEFINED
            )

            return cls(**kwargs)
        else:
            return None

    def getId(self):
        return self.__trackId

    def getPatternId(self):
        return self.__patternId

    def getIndex(self):
        return self.__index

    def setIndex(self, index):
        self.__index = index

    def getSourceIndex(self):
        return self.__sourceIndex

    def setSourceIndex(self, sourceIndex: int):
        self.__sourceIndex = int(sourceIndex)

    def getSubIndex(self):
        return self.__subIndex

    def setSubIndex(self, subIndex):
        self.__subIndex = subIndex

    def getType(self):
        return self.__trackType

    def getCodec(self) -> TrackCodec:
        return self.__trackCodec

    def getLanguage(self):
        if "language" in self.__trackTags.keys():
            return IsoLanguage.findThreeLetter(self.__trackTags["language"])
        else:
            return IsoLanguage.UNDEFINED

    def setLanguage(self, language: IsoLanguage):
        if type(language) is not IsoLanguage:
            raise TypeError('language has to be of type IsoLanguage')
        # Tags store the three-letter code, matching what getLanguage() reads back.
        self.__trackTags["language"] = language.threeLetter()

    def getTitle(self):
        if "title" in self.__trackTags.keys():
            return str(self.__trackTags["title"])
        else:
            return ""

    def setTitle(self, title: str):
        self.__trackTags["title"] = str(title)


    def getAudioLayout(self):
        return self.__audioLayout

    def getTags(self):
        return self.__trackTags

    def getDispositionSet(self):
        return self.__dispositionSet

    def setDispositionSet(self, dispositionSet: set):
        self.__dispositionSet = dispositionSet

    def getDispositionFlag(self, disposition: TrackDisposition) -> bool:
        return disposition in self.__dispositionSet

    def setDispositionFlag(self, disposition: TrackDisposition, state: bool):
        if state:
            self.__dispositionSet.add(disposition)
        else:
            self.__dispositionSet.discard(disposition)

    # def compare(self, vsTrackDescriptor: Self):
    #
    #     compareResult = {}
    #
    #     tagsDiffResult = dictKeysDiff(vsTrackDescriptor.getTags(), self.getTags())
    #
    #     if tagsDiffResult:
    #         compareResult[TrackDescriptor.TAGS_KEY] = tagsDiffResult
    #
    #     vsDispositions = vsTrackDescriptor.getDispositionSet()
    #     dispositions = self.getDispositionSet()
    #
    #     dispositionDiffResult = setDiff(vsDispositions, dispositions)
    #
    #     if dispositionDiffResult:
    #         compareResult[TrackDescriptor.DISPOSITION_SET_KEY] = dispositionDiffResult
    #
    #     return compareResult

    def setExternalSourceFilePath(self, filePath: str):
        self.__externalSourceFilePath = str(filePath)

    def getExternalSourceFilePath(self):
        return self.__externalSourceFilePath
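The disposition parsing in `TrackDescriptor.fromFfprobe()` above keeps only the flags ffprobe reports as 1. A standalone sketch of the same idea, with plain strings standing in for the `TrackDisposition` enum (an assumption; the real code also drops names `TrackDisposition.find()` does not recognize):

```python
# Minimal sketch: ffprobe reports dispositions as 0/1 flags; keep only the
# names whose flag is set. Plain strings replace the TrackDisposition enum.
ffprobe_disposition = {
    "default": 1,
    "dub": 0,
    "forced": 1,
    "hearing_impaired": 0,
}

active = {name for name, flag in ffprobe_disposition.items() if flag}
# active == {"default", "forced"}
```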
@@ -1,457 +0,0 @@
|
|||||||
import click
|
|
||||||
|
|
||||||
from textual.screen import Screen
|
|
||||||
from textual.widgets import Header, Footer, Static, Button, SelectionList, Select, DataTable, Input
|
|
||||||
from textual.containers import Grid
|
|
||||||
|
|
||||||
from ffx.model.pattern import Pattern
|
|
||||||
|
|
||||||
from .track_controller import TrackController
|
|
||||||
from .pattern_controller import PatternController
|
|
||||||
from .tag_controller import TagController
|
|
||||||
|
|
||||||
from .track_type import TrackType
|
|
||||||
from .track_codec import TrackCodec
|
|
||||||
|
|
||||||
from .iso_language import IsoLanguage
|
|
||||||
from .track_disposition import TrackDisposition
|
|
||||||
from .audio_layout import AudioLayout
|
|
||||||
|
|
||||||
from .track_descriptor import TrackDescriptor
|
|
||||||
|
|
||||||
from .tag_details_screen import TagDetailsScreen
|
|
||||||
from .tag_delete_screen import TagDeleteScreen
|
|
||||||
|
|
||||||
from textual.widgets._data_table import CellDoesNotExist
|
|
||||||
|
|
||||||
from ffx.helper import formatRichColor, removeRichColor
|
|
||||||
|
|
||||||
|
|
||||||
# Screen[dict[int, str, int]]
|
|
||||||
class TrackDetailsScreen(Screen):
|
|
||||||
|
|
||||||
CSS = """
|
|
||||||
|
|
||||||
Grid {
|
|
||||||
grid-size: 5 24;
|
|
||||||
grid-rows: 2 2 2 2 2 3 3 2 2 3 2 2 2 2 2 6 2 2 6 2 2 2;
|
|
||||||
grid-columns: 25 25 25 25 125;
|
|
||||||
height: 100%;
|
|
||||||
width: 100%;
|
|
||||||
padding: 1;
|
|
||||||
}
|
|
||||||
|
|
||||||
Input {
|
|
||||||
border: none;
|
|
||||||
}
|
|
||||||
Button {
|
|
||||||
border: none;
|
|
||||||
}
|
|
||||||
SelectionList {
|
|
||||||
border: none;
|
|
||||||
min-height: 6;
|
|
||||||
}
|
|
||||||
Select {
|
|
||||||
border: none;
|
|
||||||
}
|
|
||||||
|
|
||||||
DataTable {
|
|
||||||
min-height: 6;
|
|
||||||
}
|
|
||||||
|
|
||||||
DataTable .datatable--cursor {
|
|
||||||
background: darkorange;
|
|
||||||
color: black;
|
|
||||||
}
|
|
||||||
|
|
||||||
DataTable .datatable--header {
|
|
||||||
background: steelblue;
|
|
||||||
color: white;
|
|
||||||
}
|
|
||||||
|
|
||||||
#toplabel {
|
|
||||||
height: 1;
|
|
||||||
}
|
|
||||||
|
|
||||||
.two {
|
|
||||||
column-span: 2;
|
|
||||||
}
|
|
||||||
.three {
|
|
||||||
column-span: 3;
|
|
||||||
}
|
|
||||||
|
|
||||||
.four {
|
|
||||||
column-span: 4;
|
|
||||||
}
|
|
||||||
.five {
|
|
||||||
column-span: 5;
|
|
||||||
}
|
|
||||||
|
|
||||||
.box {
|
|
||||||
height: 100%;
|
|
||||||
border: solid green;
|
|
||||||
}
|
|
||||||
|
|
||||||
.yellow {
|
|
||||||
tint: yellow 40%;
|
|
||||||
}
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, trackDescriptor : TrackDescriptor = None, patternId = None, trackType : TrackType = None, index = None, subIndex = None):
|
|
||||||
super().__init__()
|
|
||||||
|
|
||||||
self.context = self.app.getContext()
|
|
||||||
self.Session = self.context['database']['session'] # convenience
|
|
||||||
|
|
||||||
self.__configurationData = self.context['config'].getData()
|
|
||||||
|
|
||||||
metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}
|
|
||||||
|
|
||||||
self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
|
|
||||||
self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
|
|
||||||
self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
|
|
||||||
self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
|
|
||||||
if 'streams' in metadataConfiguration.keys()
|
|
||||||
and 'remove' in metadataConfiguration['streams'].keys() else [])
|
|
||||||
self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
|
|
||||||
if 'streams' in metadataConfiguration.keys()
|
|
||||||
and 'ignore' in metadataConfiguration['streams'].keys() else [])
|
|
||||||
|
|
||||||
|
|
||||||
self.__tc = TrackController(context = self.context)
|
|
||||||
self.__pc = PatternController(context = self.context)
|
|
||||||
self.__tac = TagController(context = self.context)
|
|
||||||
|
|
||||||
self.__isNew = trackDescriptor is None
|
|
||||||
if self.__isNew:
|
|
||||||
self.__trackType = trackType
|
|
||||||
self.__trackCodec = TrackCodec.UNKNOWN
|
|
||||||
self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED
|
|
||||||
self.__index = index
|
|
||||||
self.__subIndex = subIndex
|
|
||||||
self.__trackDescriptor : TrackDescriptor = None
|
|
||||||
self.__pattern : Pattern = self.__pc.getPattern(patternId) if patternId is not None else {}
|
|
||||||
else:
|
|
||||||
self.__trackType = trackDescriptor.getType()
|
|
||||||
self.__trackCodec = trackDescriptor.getCodec()
|
|
||||||
self.__audioLayout = trackDescriptor.getAudioLayout()
|
|
||||||
self.__index = trackDescriptor.getIndex()
|
|
||||||
self.__subIndex = trackDescriptor.getSubIndex()
|
|
||||||
self.__trackDescriptor : TrackDescriptor = trackDescriptor
|
|
||||||
self.__pattern : Pattern = self.__pc.getPattern(self.__trackDescriptor.getPatternId())
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def updateTags(self):
|
|
||||||
|
|
||||||
self.trackTagsTable.clear()
|
|
||||||
|
|
||||||
trackId = self.__trackDescriptor.getId()
|
|
||||||
|
|
||||||
if trackId != -1:
|
|
||||||
|
|
||||||
trackTags = self.__tac.findAllTrackTags(trackId)
|
|
||||||
|
|
||||||
for k,v in trackTags.items():
|
|
||||||
|
|
||||||
if k != 'language' and k != 'title':
|
|
||||||
|
|
||||||
textColor = None
|
|
||||||
if k in self.__ignoreTrackKeys:
|
|
||||||
textColor = 'blue'
|
|
||||||
if k in self.__removeTrackKeys:
|
|
||||||
textColor = 'red'
|
|
||||||
|
|
||||||
row = (formatRichColor(k, textColor), formatRichColor(v, textColor))
|
|
||||||
self.trackTagsTable.add_row(*map(str, row))
|
|
||||||
|
|
||||||
|
|
||||||
def on_mount(self):
|
|
||||||
|
|
||||||
self.query_one("#index_label", Static).update(str(self.__index) if self.__index is not None else '-')
|
|
||||||
self.query_one("#subindex_label", Static).update(str(self.__subIndex)if self.__subIndex is not None else '-')
|
|
||||||
|
|
||||||
if self.__pattern is not None:
|
|
||||||
self.query_one("#pattern_label", Static).update(self.__pattern.getPattern())
|
|
||||||
|
|
||||||
if self.__trackType is not None:
|
|
||||||
self.query_one("#type_select", Select).value = self.__trackType.label()
|
|
||||||
if self.__trackType == TrackType.AUDIO:
|
|
||||||
self.query_one("#audio_layout_select", Select).value = self.__audioLayout.label()
|
|
||||||
|
|
||||||
for d in TrackDisposition:
|
|
||||||
|
|
||||||
dispositionIsSet = (self.__trackDescriptor is not None
|
|
||||||
and d in self.__trackDescriptor.getDispositionSet())
|
|
||||||
|
|
||||||
dispositionOption = (d.label(), d.index(), dispositionIsSet)
|
|
||||||
self.query_one("#dispositions_selection_list", SelectionList).add_option(dispositionOption)
|
|
||||||
|
|
||||||
if self.__trackDescriptor is not None:
|
|
||||||
|
|
||||||
self.query_one("#language_select", Select).value = self.__trackDescriptor.getLanguage().label()
|
|
||||||
self.query_one("#title_input", Input).value = self.__trackDescriptor.getTitle()
|
|
||||||
self.updateTags()
|
|
||||||
|
|
||||||
|
|
||||||
def compose(self):
|
|
||||||
|
|
||||||
self.trackTagsTable = DataTable(classes="five")
|
|
||||||
|
|
||||||
# Define the columns with headers
|
|
||||||
self.column_key_track_tag_key = self.trackTagsTable.add_column("Key", width=50)
|
|
||||||
self.column_key_track_tag_value = self.trackTagsTable.add_column("Value", width=100)
|
|
||||||
|
|
||||||
self.trackTagsTable.cursor_type = 'row'
|
|
||||||
|
|
||||||
|
|
||||||
languages = [l.label() for l in IsoLanguage]
|
|
||||||
|
|
||||||
yield Header()
|
|
||||||
|
|
||||||
with Grid():
|
|
||||||
|
|
||||||
# 1
|
|
||||||
yield Static(f"New stream" if self.__isNew else f"Edit stream", id="toplabel", classes="five")
|
|
||||||
|
|
||||||
# 2
|
|
||||||
yield Static("for pattern")
|
|
||||||
yield Static("", id="pattern_label", classes="four")
|
|
||||||
|
|
||||||
# 3
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 4
|
|
||||||
yield Static("Index / Subindex")
|
|
||||||
yield Static("", id="index_label", classes="two")
|
|
||||||
yield Static("", id="subindex_label", classes="two")
|
|
||||||
|
|
||||||
# 5
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 6
|
|
||||||
yield Static("Type")
|
|
||||||
yield Select.from_values([t.label() for t in TrackType], classes="four", id="type_select")
|
|
||||||
|
|
||||||
# 7
|
|
||||||
if self.__trackType == TrackType.AUDIO:
|
|
||||||
yield Static("Audio Layout")
|
|
||||||
yield Select.from_values([t.label() for t in AudioLayout], classes="four", id="audio_layout_select")
|
|
||||||
else:
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 8
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 9
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 10
|
|
||||||
yield Static("Language")
|
|
||||||
yield Select.from_values(languages, classes="four", id="language_select")
|
|
||||||
# 11
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 12
|
|
||||||
yield Static("Title")
|
|
||||||
yield Input(id="title_input", classes="four")
|
|
||||||
|
|
||||||
# 13
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 14
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 15
|
|
||||||
yield Static("Stream tags")
|
|
||||||
yield Static(" ")
|
|
||||||
yield Button("Add", id="button_add_stream_tag")
|
|
||||||
yield Button("Edit", id="button_edit_stream_tag")
|
|
||||||
yield Button("Delete", id="button_delete_stream_tag")
|
|
||||||
# 16
|
|
||||||
yield self.trackTagsTable
|
|
||||||
|
|
||||||
# 17
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 18
|
|
||||||
yield Static("Stream dispositions", classes="five")
|
|
||||||
|
|
||||||
# 19
|
|
||||||
yield SelectionList[int](
|
|
||||||
classes="five",
|
|
||||||
id = "dispositions_selection_list"
|
|
||||||
)
|
|
||||||
|
|
||||||
# 20
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
# 21
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 22
|
|
||||||
yield Button("Save", id="save_button")
|
|
||||||
yield Button("Cancel", id="cancel_button")
|
|
||||||
|
|
||||||
# 23
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 24
|
|
||||||
yield Static(" ", classes="five", id="messagestatic")
|
|
||||||
|
|
||||||
|
|
||||||
yield Footer(id="footer")
|
|
||||||
|
|
||||||
|
|
||||||
    def getTrackDescriptorFromInput(self):

        kwargs = {}

        kwargs[TrackDescriptor.CONTEXT_KEY] = self.context
        kwargs[TrackDescriptor.PATTERN_ID_KEY] = int(self.__pattern.getId())
        kwargs[TrackDescriptor.INDEX_KEY] = self.__index
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = self.__subIndex  #!
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.fromLabel(self.query_one("#type_select", Select).value)
        kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec

        if self.__trackType == TrackType.AUDIO:
            kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.fromLabel(self.query_one("#audio_layout_select", Select).value)
        else:
            kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.LAYOUT_UNDEFINED

        trackTags = {}
        language = self.query_one("#language_select", Select).value
        if language:
            trackTags['language'] = IsoLanguage.find(language).threeLetter()
        title = self.query_one("#title_input", Input).value
        if title:
            trackTags['title'] = title

        # Collect any further tags from the table, excluding the reserved
        # 'language' and 'title' keys handled by the dedicated widgets above.
        tableTags = {row[0]: row[1] for r in self.trackTagsTable.rows
                     if (row := self.trackTagsTable.get_row(r))
                     and row[0] != 'language' and row[0] != 'title'}

        kwargs[TrackDescriptor.TAGS_KEY] = trackTags | tableTags

        # Encode the selected dispositions as an integer bit mask (2**index).
        dispositionFlags = sum([2**f for f in self.query_one("#dispositions_selection_list", SelectionList).selected])
        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = TrackDisposition.toSet(dispositionFlags)

        return TrackDescriptor(**kwargs)
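The tag-merging step above relies on the Python 3.9+ dict union operator, where the right-hand operand wins on key collisions. A minimal illustration with made-up tag values (not taken from a real ffx database):

```python
# Reserved keys ('language', 'title') come from dedicated widgets; any other
# rows from the tags table are merged in via dict union, mirroring
# `trackTags | tableTags` above.
track_tags = {"language": "eng", "title": "Director's Commentary"}
table_tags = {"BPS": "128000", "encoder": "libopus"}

merged = track_tags | table_tags

assert merged == {
    "language": "eng",
    "title": "Director's Commentary",
    "BPS": "128000",
    "encoder": "libopus",
}
# On a key collision, the right-hand side wins:
assert ({"a": 1} | {"a": 2}) == {"a": 2}
```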
    def getSelectedTag(self):

        try:
            # Fetch the currently selected row when 'Enter' is pressed
            row_key, col_key = self.trackTagsTable.coordinate_to_cell_key(self.trackTagsTable.cursor_coordinate)

            if row_key is not None:
                selected_tag_data = self.trackTagsTable.get_row(row_key)

                tagKey = removeRichColor(selected_tag_data[0])
                tagValue = removeRichColor(selected_tag_data[1])

                return tagKey, tagValue

            else:
                return None

        except CellDoesNotExist:
            return None
    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        # Check if the button pressed is the one we are interested in
        if event.button.id == "save_button":

            # Check for multiple default/forced disposition flags
            if self.__trackType == TrackType.VIDEO:
                trackList = self.__tc.findVideoTracks(self.__pattern.getId())
            elif self.__trackType == TrackType.AUDIO:
                trackList = self.__tc.findAudioTracks(self.__pattern.getId())
            elif self.__trackType == TrackType.SUBTITLE:
                trackList = self.__tc.findSubtitleTracks(self.__pattern.getId())
            else:
                trackList = []

            siblingTrackList = [t for t in trackList if t.getType() == self.__trackType and t.getIndex() != self.__index]

            numDefaultTracks = len([t for t in siblingTrackList if TrackDisposition.DEFAULT in t.getDispositionSet()])
            numForcedTracks = len([t for t in siblingTrackList if TrackDisposition.FORCED in t.getDispositionSet()])

            self.__subIndex = len(trackList)
            trackDescriptor = self.getTrackDescriptorFromInput()

            if ((TrackDisposition.DEFAULT in trackDescriptor.getDispositionSet() and numDefaultTracks)
                    or (TrackDisposition.FORCED in trackDescriptor.getDispositionSet() and numForcedTracks)):

                self.query_one("#messagestatic", Static).update("Cannot add another stream with disposition flag 'default' or 'forced' set")

            else:

                self.query_one("#messagestatic", Static).update(" ")

                if self.__isNew:
                    # Add the track via this screen
                    self.__tc.addTrack(trackDescriptor)
                    self.dismiss(trackDescriptor)

                else:
                    track = self.__tc.getTrack(self.__pattern.getId(), self.__index)

                    # Update the track via the details screen
                    if self.__tc.updateTrack(track.getId(), trackDescriptor):
                        self.dismiss(trackDescriptor)
                    else:
                        self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()

        if event.button.id == "button_add_stream_tag":
            if not self.__isNew:
                self.app.push_screen(TagDetailsScreen(), self.handle_update_tag)

        if event.button.id == "button_edit_stream_tag":
            selectedTag = self.getSelectedTag()
            if selectedTag is not None:
                tagKey, tagValue = selectedTag
                self.app.push_screen(TagDetailsScreen(key=tagKey, value=tagValue), self.handle_update_tag)

        if event.button.id == "button_delete_stream_tag":
            selectedTag = self.getSelectedTag()
            if selectedTag is not None:
                tagKey, tagValue = selectedTag
                self.app.push_screen(TagDeleteScreen(key=tagKey, value=tagValue), self.handle_delete_tag)
    def handle_update_tag(self, tag):

        trackId = self.__trackDescriptor.getId()

        if trackId == -1:
            raise click.ClickException(f"TrackDetailsScreen.handle_update_tag: trackId not set (-1) trackDescriptor={self.__trackDescriptor}")

        if self.__tac.updateTrackTag(trackId, tag[0], tag[1]) is not None:
            self.updateTags()

    def handle_delete_tag(self, trackTag):

        trackId = self.__trackDescriptor.getId()

        if trackId == -1:
            raise click.ClickException(f"TrackDetailsScreen.handle_delete_tag: trackId not set (-1) trackDescriptor={self.__trackDescriptor}")

        tag = self.__tac.findTrackTag(trackId, trackTag[0])

        if tag is not None:
            if self.__tac.deleteTrackTag(tag.id):
                self.updateTags()
@@ -1,76 +0,0 @@
import difflib, click

from enum import Enum


class TrackDisposition(Enum):

    DEFAULT = {"name": "default", "index": 0, "indicator": "DEF"}
    FORCED = {"name": "forced", "index": 1, "indicator": "FOR"}

    DUB = {"name": "dub", "index": 2, "indicator": "DUB"}
    ORIGINAL = {"name": "original", "index": 3, "indicator": "ORG"}
    COMMENT = {"name": "comment", "index": 4, "indicator": "COM"}
    LYRICS = {"name": "lyrics", "index": 5, "indicator": "LYR"}
    KARAOKE = {"name": "karaoke", "index": 6, "indicator": "KAR"}
    HEARING_IMPAIRED = {"name": "hearing_impaired", "index": 7, "indicator": "HIM"}
    VISUAL_IMPAIRED = {"name": "visual_impaired", "index": 8, "indicator": "VIM"}
    CLEAN_EFFECTS = {"name": "clean_effects", "index": 9, "indicator": "CLE"}
    ATTACHED_PIC = {"name": "attached_pic", "index": 10, "indicator": "ATP"}
    TIMED_THUMBNAILS = {"name": "timed_thumbnails", "index": 11, "indicator": "TTH"}
    NON_DIEGETICS = {"name": "non_diegetic", "index": 12, "indicator": "NOD"}
    CAPTIONS = {"name": "captions", "index": 13, "indicator": "CAP"}
    DESCRIPTIONS = {"name": "descriptions", "index": 14, "indicator": "DES"}
    METADATA = {"name": "metadata", "index": 15, "indicator": "MED"}
    DEPENDENT = {"name": "dependent", "index": 16, "indicator": "DEP"}
    STILL_IMAGE = {"name": "still_image", "index": 17, "indicator": "STI"}

    def label(self):
        return str(self.value['name'])

    def index(self):
        return int(self.value['index'])

    def indicator(self):
        return str(self.value['indicator'])

    @staticmethod
    def toFlags(dispositionSet):
        """Flags stored in integer bits (2**index)"""
        if type(dispositionSet) is not set:
            raise click.ClickException('TrackDisposition.toFlags(): Argument is not of type set')

        flags = 0
        for d in dispositionSet:
            if type(d) is not TrackDisposition:
                raise click.ClickException('TrackDisposition.toFlags(): Element not of type TrackDisposition')
            flags += 2 ** d.index()
        return flags

    @staticmethod
    def toSet(flags):
        dispositionSet = set()
        for d in TrackDisposition:
            if flags & int(2 ** d.index()):
                dispositionSet.add(d)
        return dispositionSet

    @staticmethod
    def find(label):
        matchingDispositions = [d for d in TrackDisposition if d.label() == str(label)]
        if matchingDispositions:
            return matchingDispositions[0]
        else:
            return None

    @staticmethod
    def fromIndicator(indicator: str):
        matchingDispositions = [d for d in TrackDisposition if d.indicator() == str(indicator)]
        if matchingDispositions:
            return matchingDispositions[0]
        else:
            return None
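The `toFlags`/`toSet` pair above round-trips a set of dispositions through an integer bit mask, one bit per member (`2 ** index`). A self-contained sketch of the same encoding with a trimmed-down enum (member set reduced for brevity, not the real class):

```python
from enum import Enum

class Disposition(Enum):
    # Each member owns one bit position.
    DEFAULT = 0
    FORCED = 1
    DUB = 2

def to_flags(dispositions):
    # Sum the bit of every member present in the set.
    return sum(2 ** d.value for d in set(dispositions))

def to_set(flags):
    # Recover the set by testing each member's bit.
    return {d for d in Disposition if flags & (2 ** d.value)}

flags = to_flags({Disposition.DEFAULT, Disposition.DUB})
assert flags == 5  # 0b101: bits 0 and 2
assert to_set(flags) == {Disposition.DEFAULT, Disposition.DUB}
```

Because each member maps to a distinct bit, `to_set(to_flags(s)) == s` holds for any subset, which is what lets the TUI store the SelectionList choices as a single integer.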
@@ -1,38 +0,0 @@
from enum import Enum

class TrackType(Enum):

    VIDEO = {'label': 'video', 'index': 1}
    AUDIO = {'label': 'audio', 'index': 2}
    SUBTITLE = {'label': 'subtitle', 'index': 3}

    UNKNOWN = {'label': 'unknown', 'index': 0}

    def label(self):
        """Returns the stream type as string"""
        return str(self.value['label'])

    def indicator(self):
        """Returns the stream type as single letter"""
        return self.label()[0]

    def index(self):
        """Returns the stream type index"""
        return int(self.value['index'])

    @staticmethod
    def fromLabel(label: str):
        tlist = [t for t in TrackType if t.value['label'] == str(label)]
        if tlist:
            return tlist[0]
        else:
            return TrackType.UNKNOWN

    @staticmethod
    def fromIndex(index: int):
        tlist = [t for t in TrackType if t.value['index'] == int(index)]
        if tlist:
            return tlist[0]
        else:
            return TrackType.UNKNOWN
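`TrackType.fromLabel`/`fromIndex` (and the analogous `VideoEncoder` helpers) all follow one reverse-lookup pattern: scan the members' dict values and fall back to a sentinel member instead of raising. A runnable sketch with the member list trimmed:

```python
from enum import Enum

class TrackType(Enum):
    VIDEO = {'label': 'video', 'index': 1}
    AUDIO = {'label': 'audio', 'index': 2}
    UNKNOWN = {'label': 'unknown', 'index': 0}

    @staticmethod
    def fromLabel(label: str):
        # Linear scan over members; unknown labels map to the sentinel
        # rather than raising, so callers can branch on UNKNOWN.
        tlist = [t for t in TrackType if t.value['label'] == str(label)]
        return tlist[0] if tlist else TrackType.UNKNOWN

assert TrackType.fromLabel('audio') is TrackType.AUDIO
assert TrackType.fromLabel('data') is TrackType.UNKNOWN
```

The sentinel-return design trades exceptions for an explicit "undefined" member, which fits TUI select widgets whose value may be unset.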
@@ -1,33 +0,0 @@
from enum import Enum

class VideoEncoder(Enum):

    AV1 = {'label': 'av1', 'index': 1}
    VP9 = {'label': 'vp9', 'index': 2}
    H264 = {'label': 'h264', 'index': 3}

    UNDEFINED = {'label': 'undefined', 'index': 0}

    def label(self):
        """Returns the encoder label as string"""
        return str(self.value['label'])

    def index(self):
        """Returns the encoder index"""
        return int(self.value['index'])

    @staticmethod
    def fromLabel(label: str):
        tlist = [t for t in VideoEncoder if t.value['label'] == str(label)]
        if tlist:
            return tlist[0]
        else:
            return VideoEncoder.UNDEFINED

    @staticmethod
    def fromIndex(index: int):
        tlist = [t for t in VideoEncoder if t.value['index'] == int(index)]
        if tlist:
            return tlist[0]
        else:
            return VideoEncoder.UNDEFINED
28
guidance/workflow/optional/lean-interface-iteration.md
Normal file
@@ -0,0 +1,28 @@
# Lean Interface Iteration

Rule set name: `lean-interface-iteration`

Rule set ID: `LII`

Status: optional, prompt-activated only

Trigger examples:

- `Apply the lean-interface-iteration rules.`
- `Apply LII rules.`

LII-0001: Apply this rule set only when it is explicitly requested in the prompt.

LII-0002: The target of work under this rule set is the iterated product state for the addressed iteration only.

LII-0003: Optimize the addressed interface toward the leanest and least complex model that still satisfies the iteration order.

LII-0004: Backward compatibility, legacy aliases, and compatibility shims are not required unless the prompt explicitly asks to preserve them.

LII-0005: Prefer one authoritative interface over multiple overlapping parameters, flags, or naming variants.

LII-0006: Remove or avoid transitional interface layers when they are not required by the addressed iteration order.

LII-0007: Update affected tests, guidance, requirements, and documentation so they describe the simplified interface model rather than a mixed legacy-and-new model.

LII-0008: Never change behavior, interfaces, or surrounding areas that are not addressed by the current iteration order.
56
guidance/workflow/optional/preparation-script-design.md
Normal file
@@ -0,0 +1,56 @@
# Preparation Script Design

Rule set name: `preparation-script-design`

Rule set ID: `PSD`

Status: optional, prompt-activated only

Trigger examples:

- `Apply the preparation-script-design rules.`
- `Apply PSD rules.`

PSD-0001: Apply this rule set only when it is explicitly requested in the prompt.

PSD-0002: Use this rule set for scripts whose purpose is to prepare, verify, or expose a local development or automation environment rather than to perform product runtime behavior.

PSD-0003: Keep a preparation script focused on environment readiness, dependency installation, local helper exposure, and clear verification output; do not mix unrelated product logic into the script.

PSD-0004: Design the script to be idempotent so repeated runs converge on the same prepared state without unnecessary reinstallation or destructive side effects.

PSD-0005: Provide a verification-only mode such as `--check` that reports readiness without installing, modifying, or creating dependencies.

PSD-0006: Separate component checks from installation steps so the script can report what is missing before or after attempted remediation.

PSD-0007: Group required capabilities into clear purpose-oriented sections such as support toolchains, local package bundles, generated environment helpers, or other relevant readiness areas instead of presenting one undifferentiated dependency list.

PSD-0008: Prefer explicit per-component check helpers over opaque one-shot checks so failures remain traceable and easy to extend.

PSD-0009: Generate or update environment helper files only when they provide a stable, reusable way to expose repo-local or workspace-local tools, paths, or environment variables.

PSD-0010: Generated environment helper files shall be safe to source multiple times and should avoid duplicating path entries or clobbering unrelated user environment state.

PSD-0011: When a preparation flow seeds optional user-owned files such as config templates, do so non-destructively by creating them only when absent unless the prompt explicitly requests overwrite behavior.

PSD-0012: Report status in a concise scan-friendly line format of the shape `[status] Label: detail`, where the label names the checked component and the detail string stays short and specific.

PSD-0013: Prefer a small canonical status vocabulary in those report lines, with `ok` for satisfied checks, `warn` for non-blocking gaps, and a failure status such as `failed` for blocking or unsuccessful states.

PSD-0014: When a preparation script uses terminal colors in its status output, apply a consistent severity mapping so `ok` is green, `warn` is yellow, and all other status levels are red.

PSD-0015: In bracketed status markers such as `[ok]` or `[warn]`, keep the square brackets uncolored and apply the severity color only to the inner status text.

PSD-0016: Colorized status output shall degrade safely in non-terminal or non-color contexts so the script remains readable and automation-friendly without ANSI support.

PSD-0017: End with an explicit readiness conclusion that distinguishes between successful preparation, incomplete prerequisites, and failed installation attempts.

PSD-0018: Installation logic should use the narrowest supported platform-specific package-manager actions necessary for the declared scope and should fail clearly when no supported installation path is available.

PSD-0019: Treat repo-local helper tooling and local package installation boundaries explicitly rather than assuming global installs, especially when the prepared environment is intended to be reproducible.

PSD-0020: Keep the script suitable for both interactive local developer use and non-interactive automation checks by avoiding prompts during normal execution unless the prompt explicitly requires interactivity.

PSD-0021: When a script depends on generated helper files or adjacent validation helpers, update those supporting files only as needed to keep the preparation flow coherent and usable.

PSD-0022: Verify shell syntax after changes and, when feasible, run a dry readiness check so the resulting preparation flow is validated rather than only written.
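Rules PSD-0012 through PSD-0016 pin down a concrete output contract; a minimal POSIX-shell sketch of a status reporter satisfying them follows. The function name and the sample check labels are illustrative, not part of the rule set or of any actual ffx script:

```shell
#!/bin/sh
# Enable colors only when stdout is a terminal (PSD-0016: safe degradation).
if [ -t 1 ]; then
    GREEN='\033[32m'; YELLOW='\033[33m'; RED='\033[31m'; RESET='\033[0m'
else
    GREEN=''; YELLOW=''; RED=''; RESET=''
fi

# report <status> <label> <detail>  ->  "[status] Label: detail" (PSD-0012)
report() {
    case "$1" in
        ok)   color="$GREEN" ;;   # green for satisfied checks (PSD-0014)
        warn) color="$YELLOW" ;;  # yellow for non-blocking gaps
        *)    color="$RED" ;;     # every other status is red
    esac
    # Brackets stay uncolored; only the inner status text is colored (PSD-0015).
    printf '[%b%s%b] %s: %s\n' "$color" "$1" "$RESET" "$2" "$3"
}

report ok     "Shell toolchain" "sh found"
report warn   "Config template" "config file absent; would seed on prepare"
report failed "cpulimit"        "not found on PATH"
```

In a pipe or log file the color variables are empty, so automation sees plain `[ok] Label: detail` lines it can grep without stripping ANSI sequences.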
@@ -1,7 +1,7 @@
 [project]
 name = "ffx"
 description = "FFX recoding and metadata managing tool"
-version = "0.2.3"
+version = "0.2.4"
 license = {file = "LICENSE.md"}
 dependencies = [
     "requests",
@@ -27,6 +27,11 @@ Homepage = "https://gitea.maveno.de/Javanaut/ffx"
 Repository = "https://gitea.maveno.de/Javanaut/ffx.git"
 Issues = "https://gitea.maveno.de/Javanaut/ffx/issues"
 
+[project.optional-dependencies]
+test = [
+    "pytest",
+]
+
 [build-system]
 requires = [
     "setuptools",
@@ -35,4 +40,15 @@ requires = [
 build-backend = "setuptools.build_meta"
 
 [project.scripts]
-ffx = "ffx.ffx:ffx"
+ffx = "ffx.cli:ffx"
+
+[tool.pytest.ini_options]
+testpaths = ["tests"]
+python_files = ["test_*.py"]
+norecursedirs = ["tests/legacy", "tests/support"]
+addopts = "-ra"
+markers = [
+    "integration: exercises the FFX bundle with real ffmpeg/ffprobe processes",
+    "pattern_management: covers requirements/pattern_management.md",
+    "subtrack_mapping: covers requirements/subtrack_mapping.md",
+]
98
requirements/architecture.md
Normal file
@@ -0,0 +1,98 @@
# Architecture

## Architecture Goals

- Keep the tool small, local, and easy to reason about.
- Separate media inspection, stored normalization rules, and conversion execution clearly enough that users can inspect and adjust behavior.
- Favor explicit local state and deterministic rule application over opaque automation.
- Make external runtime dependencies and platform assumptions visible.

## System Context

- Primary actors:
  - Local operator running the CLI.
  - Local operator using the Textual TUI to inspect files and maintain rules.
- External systems:
  - `ffprobe` for media introspection.
  - `ffmpeg` for conversion and extraction.
  - TMDB API for optional show and episode metadata.
  - Local filesystem for source media, generated outputs, subtitles, logs, config, and database files.
- Data entering the system:
  - Media container and stream metadata from source files.
  - Regex patterns and per-show normalization rules entered in the TUI.
  - Optional config values from `~/.local/etc/ffx.json`.
  - Optional TMDB identifiers and CLI overrides.
  - Optional external subtitle files.
- Data leaving the system:
  - Normalized output media files.
  - Extracted stream files from unmux operations.
  - SQLite rows representing shows, patterns, tracks, tags, shifted seasons, and properties.
  - Local log output and console messages.
## High-Level Building Blocks

- Frontend, CLI, API, or worker:
  - A Click-based CLI in [`src/ffx/cli.py`](/home/osgw/.local/src/codex/ffx/src/ffx/cli.py), exposed as the `ffx` command and via `python -m ffx`, including lightweight maintenance wrappers for bundle setup, workstation preparation, and upgrade tasks.
  - A Textual terminal UI rooted in [`src/ffx/ffx_app.py`](/home/osgw/.local/src/codex/ffx/src/ffx/ffx_app.py) with screens for shows, patterns, file inspection, tracks, tags, and shifted seasons.
- Core business logic:
  - Descriptor objects model media files, shows, and tracks.
  - Controllers encapsulate CRUD operations and workflow orchestration for shows, patterns, tags, tracks, season shifts, configuration, and conversion.
  - `MediaDescriptorChangeSet` computes differences between a file and its stored target schema to drive metadata and disposition updates.
  - File inspection caches combined `ffprobe` data and crop-detection results per source and sampling window within one process to avoid repeated subprocess work.
- Storage:
  - SQLite via SQLAlchemy ORM, with schema rooted in shows, patterns, tracks, media tags, track tags, shifted seasons, and generic properties.
  - Ordered schema migrations are loaded dynamically from per-version-step modules under [`src/ffx/model/migration/`](/home/osgw/.local/src/codex/ffx/src/ffx/model/migration/).
  - A configuration JSON file supplies optional path, metadata-filtering, and filename-template settings.
- Integration adapters:
  - Process execution wrapper for `ffmpeg`, `ffprobe`, `nice`, and `cpulimit`, with explicit disabled states for niceness and CPU limiting, support for both absolute `cpulimit` values and machine-wide percent input, and a combined `cpulimit -- nice -n ... <command>` execution shape when both limits are configured.
  - HTTP adapter for TMDB via `requests`.
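The combined `cpulimit -- nice -n ... <command>` execution shape described above can be sketched as a small argv builder. The function name and the example limit values are illustrative, not the actual ffx wrapper:

```python
def build_limited_command(cmd, cpulimit_percent=None, niceness=None):
    """Wrap cmd with optional nice and cpulimit prefixes.

    When both limits are set, the result has the shape
    cpulimit -l <pct> -- nice -n <nice> <command>.
    """
    argv = list(cmd)
    if niceness is not None:
        # nice is the inner wrapper so cpulimit governs the whole tree.
        argv = ["nice", "-n", str(niceness)] + argv
    if cpulimit_percent is not None:
        argv = ["cpulimit", "-l", str(cpulimit_percent), "--"] + argv
    return argv

argv = build_limited_command(["ffmpeg", "-i", "in.mkv", "out.webm"],
                             cpulimit_percent=200, niceness=10)
assert argv == ["cpulimit", "-l", "200", "--",
                "nice", "-n", "10",
                "ffmpeg", "-i", "in.mkv", "out.webm"]
```

With both limits disabled the command passes through unchanged, matching the "explicit disabled states" noted above.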
## Data And Interface Notes

- Key entities or records:
  - `Show`: canonical TV show metadata plus digit-formatting rules, optional show-level notes, and an optional show-level encoding-quality fallback.
  - `Pattern`: regex rule tying filenames to one show and one target media schema.
  - `Track` and `TrackTag`: persisted target stream records, codec, dispositions, audio layout, and stream-level tags. Detailed source-to-target mapping rules live in `requirements/subtrack_mapping.md`.
  - `MediaTag`: persisted container-level metadata for a pattern.
  - `ShiftedSeason`: mapping from source numbering ranges to adjusted season and episode numbers, owned either by a show as fallback or by a pattern as override.
  - `Property`: internal key-value storage currently used for database versioning.
- External interfaces:
  - CLI commands for conversion, inspection, extraction, and crop detection.
  - TUI workflows for rule authoring and rule maintenance.
  - Environment variable `TMDB_API_KEY` for TMDB access.
  - Config keys `databasePath`, `logDirectory`, and `outputFilenameTemplate`, plus optional metadata-filter rules.
- Validation rules:
  - Only supported media-file extensions are accepted for conversion.
  - The stored database version must either already match the runtime-required version or have a supported sequential migration path to it.
  - A normalized descriptor may have at most one default and one forced stream per relevant track type.
  - Shifted-season ranges are intended not to overlap within the same owner scope and season, and runtime resolution prefers pattern-owned matches over show-owned matches.
  - TMDB lookups require a show ID and season and episode numbers.
- Error-handling approach:
  - User-facing operational failures are raised as `click.ClickException` or warnings.
  - Ambiguous default and forced stream states trigger prompts unless `--no-prompt` is set, in which case the command fails fast.
  - External-process failures and invalid media are surfaced through logs and command errors rather than retries, except for TMDB rate-limit retries.
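For orientation, a config file using only the keys named above might look like the sketch below. All values, and the placeholder syntax in the template string, are illustrative assumptions, not the tool's documented format:

```json
{
  "databasePath": "/home/user/.local/var/lib/ffx/ffx.db",
  "logDirectory": "/home/user/.local/var/log",
  "outputFilenameTemplate": "{show} - S{season}E{episode}.mkv"
}
```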
## Deployment And Operations

- Runtime environment:
  - Local Python environment with the package installed and `ffmpeg`, `ffprobe`, `nice`, and `cpulimit` available on `PATH`.
- Deployment shape:
  - Single-process command execution on demand; no daemon, queue, or network service of its own.
- Secrets and configuration handling:
  - TMDB secret is read from `TMDB_API_KEY`.
  - User config is read from `~/.local/etc/ffx.json`.
  - Database path may also be overridden per command via `--database-file`.
- Logging and monitoring approach:
  - File and console logging configured per invocation.
  - Default log file path is `~/.local/var/log/ffx.log`.
  - No dedicated monitoring integration is present.
## Open Technical Questions

- Question: Should Linux-specific assumptions such as `/dev/null`, `nice`, `cpulimit`, and `~/.local` remain part of the supported-platform contract?
  - Risk: Portability and operational behavior are underspecified for non-Linux environments.
  - Next decision needed: Either document Linux-like systems as the official support boundary or refactor the process and path handling for broader portability.

- Question: Should placeholder TUI surfaces such as settings and help become part of the required product surface or stay explicitly out of scope?
  - Risk: The UI appears broader than the actually finished feature set.
  - Next decision needed: Either remove or complete placeholder screens and update requirements accordingly.
68
requirements/pattern_management.md
Normal file
@@ -0,0 +1,68 @@
# Pattern Management

This file defines the behavioral contract for managing shows, patterns, and
pattern-backed filename matching.

Primary source: actual tool code in `src/ffx/`.
Secondary source: operator intent captured in task discussion.

## Scope

- The show, pattern, and track hierarchy stored in SQLite.
- The role of a pattern as a reusable normalization definition for related media files.
- Filename-driven assignment of a scanned media file to one show through one matching pattern.
- Duplicate-match handling when more than one pattern matches the same filename.

## Terms

- `show`: logical series identity such as one TV show entry in the database.
- `pattern`: regex-backed normalization definition attached to one show.
- `track`: one persisted target-track definition attached to one pattern.
- `scanned media file`: one source file currently being inspected or converted.
- `duplicate pattern match`: a filename state where more than one stored pattern matches the same scanned media file.
- `pattern-backed target schema`: the combination of one pattern's stored media tags and stored track definitions.

## Rules

- `PATTERN_MANAGEMENT-0001`: The domain model shall treat a show as the parent entity for patterns that describe distinct release families or normalization schemas for that show. A show may temporarily exist without patterns during editing or initial TUI creation.
- `PATTERN_MANAGEMENT-0002`: Each persisted pattern shall belong to exactly one show.
- `PATTERN_MANAGEMENT-0003`: The domain model shall treat a pattern as the reusable normalization definition for a series of media files expected to share the same internal track layout and materially similar stream and container metadata.
- `PATTERN_MANAGEMENT-0004`: Each persisted track definition shall belong to exactly one pattern.
- `PATTERN_MANAGEMENT-0005`: A pattern may also carry pattern-level media tags. The pattern's media tags plus its track definitions together form the pattern-backed target schema.
- `PATTERN_MANAGEMENT-0006`: A scanned media file shall resolve to at most one pattern and therefore at most one show.
- `PATTERN_MANAGEMENT-0007`: If no pattern matches a filename, the file shall remain unmatched rather than being assigned implicitly.
- `PATTERN_MANAGEMENT-0008`: If more than one pattern matches the same filename, the system shall raise a duplicate pattern match error instead of silently selecting one.
- `PATTERN_MANAGEMENT-0009`: Duplicate-match detection shall apply regardless of whether the competing patterns belong to the same show or to different shows.
- `PATTERN_MANAGEMENT-0010`: Exact duplicate pattern definitions for the same show should not create multiple persisted pattern rows.
- `PATTERN_MANAGEMENT-0011`: A persisted pattern shall define one or more tracks. Creating or retaining a zero-track pattern in the database is invalid managed state and shall be prohibited.
- `PATTERN_MANAGEMENT-0012`: A show may exist without patterns as an intermediate editing state, for example when a user creates the show first in the TUI and adds patterns later.
- `PATTERN_MANAGEMENT-0013`: Operator-facing pattern management should expose the owning show, regex pattern, stored track set, and stored media-tag set so a user can reason about matching and normalization behavior.
- `PATTERN_MANAGEMENT-0014`: Matching semantics shall be deterministic and documented. Implicit "last matching pattern wins" behavior is not acceptable released behavior.
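The matching rules above (one match, explicit unmatched state, hard failure on duplicates) can be sketched in a few lines. The names, error type, and sample patterns are illustrative, not the actual `matchFilename(...)` implementation in `pattern_controller.py`:

```python
import re

class DuplicatePatternMatchError(Exception):
    """More than one stored pattern matched the same filename."""

def match_filename(filename, patterns):
    """Return the single matching pattern dict, {} if none, raise on >1."""
    # Deterministic scan over all stored patterns; no early exit, so a
    # second match can never be silently shadowed by the first.
    matches = [p for p in patterns if re.search(p["regex"], filename)]
    if len(matches) > 1:
        raise DuplicatePatternMatchError(
            f"{len(matches)} patterns match {filename!r}")
    return matches[0] if matches else {}

patterns = [
    {"show_id": 1, "regex": r"ShowA\.S(\d+)E(\d+)"},
    {"show_id": 2, "regex": r"ShowB\.S(\d+)E(\d+)"},
]
assert match_filename("ShowA.S01E02.mkv", patterns)["show_id"] == 1
assert match_filename("Unrelated.mkv", patterns) == {}
```

A filename that happens to match both patterns raises `DuplicatePatternMatchError`, which is the acceptance behavior required by `PATTERN_MANAGEMENT-0008` rather than a "last match wins" fallback.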
## Acceptance

- A filename that matches exactly one pattern yields one matched pattern and one show identity.
- A filename that matches no pattern yields no matched pattern and an unmatched state.
- A filename that matches more than one pattern yields an explicit duplicate-match error.
- A pattern-backed target schema can be reconstructed from one pattern's stored media tags and stored track definitions.
- A show may be stored before any patterns are attached to it.
- A pattern cannot be stored or retained as a valid managed pattern unless at least one track is defined for it.
- Pattern-backed conversion never proceeds with two competing matching patterns for the same input filename.

## Current Code Fit

- `src/ffx/model/show.py` implements a one-to-many `Show -> Pattern` relationship.
- `src/ffx/model/pattern.py` implements `Pattern.show_id`, a one-to-many `Pattern -> Track` relationship, a one-to-many `Pattern -> MediaTag` relationship, and a unique `(show_id, pattern)` constraint for freshly created databases.
- `src/ffx/model/track.py` implements `Track.pattern_id`, so each persisted track belongs to one pattern.
- `src/ffx/model/pattern.py` reconstructs a pattern-backed target schema through `Pattern.getMediaDescriptor(...)`, combining stored media tags and stored tracks.
- `src/ffx/file_properties.py` assumes a scanned file resolves to at most one pattern, because it stores only one `self.__pattern` and derives one `show_id` from it.
- `src/ffx/pattern_controller.py` prevents exact duplicate `(show_id, pattern)` definitions during create and update flows, and it refreshes cached compiled regexes when stored pattern expressions change.
- `src/ffx/pattern_controller.py` now complies with duplicate-match safety. `matchFilename(...)` scans deterministically, returns exactly one match, returns `{}` for no match, and raises an explicit duplicate-pattern-match error when more than one pattern matches the same filename.
- The current persistence layer already aligns with the intended empty-show workflow because a show can exist without patterns.
- New pattern creation and schema replacement flows now require at least one track, and `TrackController.deleteTrack(...)` prevents deleting the last persisted track from a pattern.
- Trackless legacy rows can still exist in preexisting databases, but matching now rejects them explicitly instead of letting them participate silently.

## Risks

- The intended "release family" meaning of a pattern is a domain assumption, not something the code verifies automatically across all files matching that pattern.
|
||||||
|
- Preexisting databases created before the newer validation rules may still contain invalid rows, so upgrade and cleanup paths should continue to treat explicit validation failures as recoverable operator signals.

124
requirements/project.md
Normal file
@@ -0,0 +1,124 @@

## Purpose And Scope

- Project name: FFX
- User problem: TV episode files from mixed sources arrive with inconsistent codecs, stream metadata, subtitle layouts, season and episode numbering, and output filenames, which makes them awkward to archive and use in media-player applications.
- Target users: Individual operators curating a local TV media library on a workstation, especially users willing to define normalization rules per show.
- Success outcome: A user can inspect source files, define reusable show and pattern rules, and produce output files whose streams, metadata, and filenames follow a predictable schema for web playback and library import.
- Out of scope:
  - Multi-user or hosted service workflows.
  - General movie-library management.
  - Distributed transcoding or remote job orchestration.
  - Broad media-server administration beyond file preparation.

## Required Product

- Deliverable type: Installable Python command-line application with a Textual terminal UI for inspection and rule editing.
- Core capabilities:
  - Maintain an SQLite-backed database of shows, filename-matching patterns, per-pattern stream layouts and metadata tags, and optional season-shift rules.
  - Inspect existing media files through `ffprobe` and compare discovered stream metadata with stored normalization rules.
  - Convert media files through `ffmpeg` into a normalized output layout, including video recoding, audio transcoding to Opus, metadata cleanup and rewrite, and controlled disposition flags.
  - Build output filenames from detected or configured show, season, and episode information, optionally enriched from TMDB and a configurable Jinja-style filename template.
  - Support auxiliary file operations such as subtitle import, unmuxing, crop detection, rename-only conversion runs, and direct in-place episode renaming.
- Supported environments:
  - Local execution on a Python-capable workstation.
  - Best-supported on Linux-like systems because the implementation assumes `~/.local`, `/dev/null`, `nice`, and `cpulimit`.
  - Requires `ffmpeg`, `ffprobe`, and `cpulimit` on `PATH`.
- Operational owner: The local user running the tool and maintaining its config, database, and external tooling.

## Suggested User Stories

- As a library maintainer, I want to define show-specific matching rules once so that future source files can be normalized automatically.
- As an operator, I want to inspect a file before conversion so that I can compare its actual streams and tags against the stored target schema.
- As a user preparing web-playback files, I want to recode video and audio with a small set of predictable options so that results are compatible and consistently named.
- As a user dealing with nonstandard releases, I want CLI overrides for language, title, stream order, default and forced tracks, and season and episode data so that one-off fixes do not require database edits first.
- As a user importing anime or other shifted numbering schemes, I want season and episode offsets at the show level with optional pattern-specific overrides so that generated filenames align with TMDB and media-library expectations.

## Functional Requirements

- The system shall provide a CLI entrypoint named `ffx` with commands for `convert`, `inspect`, `shows`, `rename`, `unmux`, `cropdetect`, `setup`, `configure_workstation`, `upgrade`, `version`, and `help`.
- The system shall support a two-step local installation and preparation flow:
  - `tools/setup.sh` is the bootstrap entrypoint for the first step and shall own bundle virtualenv creation, package installation, shell alias exposure, and optional Python test-package installation.
  - `tools/configure_workstation.sh` is the bootstrap entrypoint for the second step and shall own workstation dependency checks and installation plus local config and directory seeding.
  - After the bundle is installed, `ffx setup` and `ffx configure_workstation` shall remain aligned wrapper entrypoints for those same two steps.
- The CLI command `ffx setup` shall act as a wrapper for the first-step bundle-preparation flow in `tools/setup.sh`.
- The CLI command `ffx configure_workstation` shall act as a wrapper for the second-step preparation flow in `tools/configure_workstation.sh`.
- The system shall persist reusable normalization rules in SQLite for:
  - shows and show formatting digits,
  - optional show-level notes,
  - optional show-level quality defaults,
  - regex-based filename patterns,
  - per-pattern media tags,
  - per-pattern stream definitions,
  - show-level and pattern-level shifted-season mappings,
  - internal database version properties.
- The system shall apply supported ordered database migrations automatically when opening an older local database file and shall fail fast when no supported path exists.
- Before applying a required database migration, the system shall show the current version, target version, required sequential steps, and whether each corresponding migration module is present, then require user confirmation.
- Before applying a confirmed file-backed database migration, the system shall create an in-place backup copy whose filename includes the covered version range.
- Detailed show, pattern, and duplicate-match management rules live in `requirements/pattern_management.md`.
- The system shall inspect source media using `ffprobe` and derive a structured description of container metadata and streams.
- The system shall optionally open a Textual UI to browse shows, inspect files, and create, edit, or delete shows, patterns, stream definitions, tags, and shifted-season rules.
- The system shall match filenames against stored regex patterns to decide whether an input file should inherit a target stream and metadata schema.
- The system shall convert supported input files (`mkv`, `mp4`, `avi`, `flv`, `webm`) with `ffmpeg`, supporting at least:
  - VP9, AV1, and H.264 video encoding,
  - Opus audio encoding with bitrate selection based on channel layout,
  - metadata and disposition rewriting,
  - optional crop detection and crop application,
  - optional deinterlacing and denoising,
  - optional subtitle import from external files,
  - rename-only move mode.
- The system shall support optional TMDB lookups to resolve show names, years, and episode titles when a show ID, season, and episode are available.
- The system shall generate output filenames from show metadata, season and episode indices, and episode names using the configured filename template.
- The system shall allow CLI overrides for stream languages, stream titles, default and forced tracks, stream order, TMDB show and episode data, output directory, label prefix, and processing resource limits.
- The system shall resolve encoding quality by precedence `CLI override -> pattern -> show -> encoder default` and shall report the chosen value and source.
- The system shall resolve season shifting by precedence `pattern -> show -> identity default` and shall report the chosen mapping and source.
- Processing resource limit rules:
  - `--nice` shall accept niceness values from `-20` through `19`; omitting the option shall disable niceness adjustment.
  - `--cpu` shall accept either a positive absolute `cpulimit` value such as `200`, or a percentage suffixed with `%` such as `25%` to represent a share of present CPUs; omitting the option or using `0` shall disable CPU limiting.
  - When both limits are configured, the process wrapper shall execute the target command through `cpulimit` around a `nice -n ...` invocation so both limits apply to the launched media command.
- The system shall support extracting streams into separate files via `unmux` and reporting suggested crop parameters via `cropdetect`.
- The system shall support in-place episode renaming via `rename`, requiring a `--prefix`, accepting optional `--season` and `--suffix` overrides, preserving the source extension, and supporting dry-run output without moving files.
- Crop detection shall use a configurable sampling window, defaulting to a 60-second seek and a 180-second analysis duration, and repeated crop-detection requests for the same source plus sampling window shall reuse cached results within one process.
- The system shall handle invalid input and system failures gracefully by logging warnings or raising `click` errors for missing files, invalid media, missing TMDB credentials, incompatible database versions, and ambiguous track dispositions when prompting is disabled.
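
The resource-limit rules above can be sketched as command-list construction. The helper names and the exact `--cpu` parsing shown here are illustrative assumptions, not the tool's actual process wrapper:

```python
import os
import shlex


def parse_cpu_limit(value):
    """Return an absolute cpulimit percentage, or None when disabled.

    Accepts "200" (absolute) or "25%" (share of present CPUs);
    None, 0, and "0" disable CPU limiting.
    """
    if value in (None, 0, "0"):
        return None
    text = str(value)
    if text.endswith("%"):
        return int(text[:-1]) * (os.cpu_count() or 1)
    return int(text)


def wrap_command(command, nice=None, cpu=None):
    """Wrap `command` so cpulimit applies around a nice invocation."""
    wrapped = list(command)
    if nice is not None:
        if not -20 <= nice <= 19:
            raise ValueError("niceness must be between -20 and 19")
        wrapped = ["nice", "-n", str(nice)] + wrapped
    limit = parse_cpu_limit(cpu)
    if limit is not None:
        wrapped = ["cpulimit", "-l", str(limit)] + wrapped
    return wrapped


print(shlex.join(wrap_command(["ffmpeg", "-i", "in.mkv", "out.webm"],
                              nice=10, cpu="25%")))
```

Building the wrapper outermost-last means `cpulimit` throttles the whole `nice -n ... ffmpeg ...` invocation, matching the nesting order stated in the rule.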

## Quality Requirements

- The system should stay understandable as a small local tool: controllers, descriptors, models, and screens should remain separate enough for contributors to trace a workflow end to end.
- The system should produce predictable output for the same database rules, CLI overrides, and source files.
- The system should preserve a lightweight operational footprint: local SQLite state, local log file, no mandatory background services.
- The system should be testable through modern automatically discovered tests and through remaining legacy harness coverage during migration.
- The system should expose enough logging to diagnose failed probes, failed conversions, and rule mismatches without requiring a debugger.

## Constraints And Assumptions

- Technology constraints:
  - Python package built with setuptools.
  - Primary libraries: `click`, `textual`, `sqlalchemy`, `jinja2`, `requests`.
  - Conversion and inspection rely on external executables rather than pure-Python media libraries.
- Hosting or infrastructure constraints:
  - Intended for local execution, not server deployment.
  - Stores default state in `~/.local/etc/ffx.json`, `~/.local/var/ffx/ffx.db`, and `~/.local/var/log/ffx.log`.
- Timeline constraints:
  - The current implemented scope reflects a compact alpha release stream up to version `0.2.4`.
- Team capacity assumptions:
  - Maintained as a small codebase where simple patterns and direct controller logic are preferred over framework-heavy abstractions.
- Third-party dependencies:
  - `ffmpeg`, `ffprobe`, and `cpulimit`.
  - TMDB API access through `TMDB_API_KEY` for metadata enrichment.
- Installation assumptions:
  - The Python-side bundle install step and optional Python test extras are managed by `tools/setup.sh`, with `ffx setup` as the aligned wrapper after bootstrap.
  - The workstation-preparation step is managed separately by `tools/configure_workstation.sh` or `ffx configure_workstation`.

## Acceptance Scope

- First release boundary:
  - Local installation through `pip`.
  - Working SQLite-backed rule storage.
  - Functional CLI conversion and inspection workflows.
  - Textual CRUD flows for shows, patterns, tags, tracks, and shifted seasons.
  - TMDB-assisted filename generation, subtitle import, season shifting, database versioning, and configurable output filename templating.
- Excluded follow-up ideas:
  - Completing placeholder screens such as settings and help.
  - Hardening platform portability beyond Linux-like systems.
  - Broader media types, richer release packaging, and production-grade background processing.
- Demonstration scenario:
  - Inspect a TV episode file, define or update the matching show and pattern in the TUI, then run `ffx convert` so the result uses the stored stream schema, optional TMDB episode naming, and a normalized output filename.
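
The Jinja-style filename templating required above can be sketched as follows. The template string, variable names, and function signature are illustrative assumptions, not the tool's actual config keys:

```python
from jinja2 import Template

# Hypothetical template; the real template string lives in the user's config.
FILENAME_TEMPLATE = (
    "{{ show }} - S{{ '%02d' | format(season) }}E{{ '%02d' | format(episode) }}"
    "{% if title %} - {{ title }}{% endif %}.{{ ext }}"
)


def build_output_name(show, season, episode, title=None, ext="webm"):
    """Render one normalized episode filename from resolved metadata."""
    return Template(FILENAME_TEMPLATE).render(
        show=show, season=season, episode=episode, title=title, ext=ext
    )


print(build_output_name("Example Show", 1, 2, title="Pilot"))
# Example Show - S01E02 - Pilot.webm
```

Because the season and episode indices are rendered after shifting and the title after TMDB lookup, the template only ever sees target-domain values.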

177
requirements/shifted_seasons_handling.md
Normal file
@@ -0,0 +1,177 @@

# Shifted Seasons Handling

This file defines the behavioral contract for mapping source season and episode numbering to target season and episode numbering through stored shifted-season rules.

Primary sources:

- `requirements/project.md`
- `requirements/architecture.md`
- actual tool code in `src/ffx/`

Secondary source:

- `SCRATCHPAD.md`, used only to clarify current hardening gaps and not as the primary contract source.

## Scope

- Persisting shifted-season rules in SQLite.
- Allowing shifted-season rules to be attached either to a show or to a specific pattern.
- Selecting at most one active shifted-season rule for one concrete source season and episode tuple.
- Applying additive season and episode offsets to produce target numbering.
- Using shifted target numbering during `convert` for TMDB episode lookup and generated season and episode filename tokens.
- Managing show-level default mappings and pattern-level override mappings from the Textual editing workflows.

## Out Of Scope

- General filename parsing rules for detecting season and episode values.
- Standalone `rename` command behavior, which currently uses explicit rename inputs rather than stored shifted-season rules.
- Stream or track mapping behavior unrelated to season and episode numbering.

## Terms

- `shifted-season rule`: one persisted row describing how one source-numbering range maps to target numbering through additive offsets.
- `show-level shifted-season rule`: a rule attached directly to a show and used as the fallback mapping layer for that show.
- `pattern-level shifted-season rule`: a rule attached directly to a pattern and used as the override mapping layer for that pattern.
- `source numbering`: the season and episode values detected from the current source file or supplied as source-side conversion inputs before shifting.
- `target numbering`: the season and episode values after one active shifted-season rule has been applied.
- `original season`: the source-domain season number a shifted-season rule is eligible to match.
- `episode range`: the optional source-domain episode interval covered by one shifted-season rule.
- `open bound`: an unbounded start or end of the episode range. Current storage uses `-1` as the internal sentinel for an open bound.
- `active shifted-season rule`: the single rule selected for one concrete input after precedence resolution.
- `identity mapping`: the default `1:1` outcome where source numbering is used unchanged.

## Rules

- `SHIFTED_SEASONS_HANDLING-0001`: The domain model shall allow a shifted-season rule to be owned by exactly one of:
  - one show
  - one pattern
- `SHIFTED_SEASONS_HANDLING-0002`: A single shifted-season rule shall not belong to both a show and a pattern at the same time.
- `SHIFTED_SEASONS_HANDLING-0003`: A shifted-season rule shall carry these fields: `original_season`, `first_episode`, `last_episode`, `season_offset`, and `episode_offset`.
- `SHIFTED_SEASONS_HANDLING-0004`: `season_offset` and `episode_offset` shall be additive signed integers applied to matched source numbering to produce target numbering.
- `SHIFTED_SEASONS_HANDLING-0005`: A shifted-season rule shall match a source tuple only when:
  - the source season equals `original_season`
  - the source episode is greater than or equal to `first_episode` when the lower bound is closed
  - the source episode is less than or equal to `last_episode` when the upper bound is closed
- `SHIFTED_SEASONS_HANDLING-0006`: An open lower or upper episode bound shall represent an unbounded side of the covered source episode range.
- `SHIFTED_SEASONS_HANDLING-0007`: If one shifted-season rule matches, target numbering shall be:
  - `target season = source season + season_offset`
  - `target episode = source episode + episode_offset`
- `SHIFTED_SEASONS_HANDLING-0008`: If no shifted-season rule matches, source numbering shall pass through unchanged.
- `SHIFTED_SEASONS_HANDLING-0009`: Shifted-season handling shall operate in a source-to-target numbering model. Stored rules map detected source numbering to the target numbering used by conversion-facing metadata and output naming.
- `SHIFTED_SEASONS_HANDLING-0010`: Pattern matching identifies the owning show and optionally a more specific owning pattern. Resolution of the active shifted-season rule shall use this precedence order:
  - matching pattern-level rule
  - matching show-level rule
  - identity mapping
- `SHIFTED_SEASONS_HANDLING-0011`: At most one shifted-season rule may be active for one concrete source season and episode tuple. Shifted-season rules shall never stack or compose.
- `SHIFTED_SEASONS_HANDLING-0012`: Within one owner scope, shifted-season rules shall not overlap in their effective episode coverage for the same `original_season`.
- `SHIFTED_SEASONS_HANDLING-0013`: If a shifted-season rule uses two closed episode bounds, `last_episode` shall be greater than or equal to `first_episode`.
- `SHIFTED_SEASONS_HANDLING-0014`: Shifted-season rule evaluation shall be deterministic. Released behavior shall not depend on arbitrary database row order when invalid overlapping rules exist.
- `SHIFTED_SEASONS_HANDLING-0015`: A pattern-level rule is permitted to map to zero offsets. Such a rule is a valid explicit override that beats show-level fallback and produces identity mapping for its covered source range.
- `SHIFTED_SEASONS_HANDLING-0016`: During `convert`, when show, season, and episode values are available and stored shifting is active, the shifted target numbering shall drive:
  - TMDB episode lookup
  - season and episode filename tokens such as `S01E02`
  - generated episode basenames that include season and episode numbering
- `SHIFTED_SEASONS_HANDLING-0017`: When conversion is supplied explicit target-domain season or episode values for TMDB naming, the system shall not apply stored shifting on top of those already-targeted values.
- `SHIFTED_SEASONS_HANDLING-0018`: Operator-facing editing shall expose shifted-season rule management in both of these places:
  - show editing for show-level default mappings
  - pattern editing for pattern-level override mappings
- `SHIFTED_SEASONS_HANDLING-0019`: User-facing shifted-season editing should present open episode bounds as a natural empty-state input rather than forcing operators to type the internal sentinel directly.
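
Rules 0005 through 0010 above can be sketched as follows. The dataclass and function names are illustrative, and `-1` stands in for the open-bound sentinel described under Terms:

```python
from dataclasses import dataclass

OPEN_BOUND = -1  # internal sentinel for an unbounded episode-range side


@dataclass
class ShiftedSeasonRule:
    original_season: int
    first_episode: int   # OPEN_BOUND means no lower bound
    last_episode: int    # OPEN_BOUND means no upper bound
    season_offset: int
    episode_offset: int

    def matches(self, season, episode):
        if season != self.original_season:
            return False
        if self.first_episode != OPEN_BOUND and episode < self.first_episode:
            return False
        if self.last_episode != OPEN_BOUND and episode > self.last_episode:
            return False
        return True


def resolve_numbering(season, episode, pattern_rules, show_rules):
    """Apply at most one rule: pattern scope beats show scope beats identity."""
    for scope in (pattern_rules, show_rules):
        matched = [rule for rule in scope if rule.matches(season, episode)]
        if len(matched) > 1:
            raise ValueError("overlapping shifted-season rules in one scope")
        if matched:
            rule = matched[0]
            return season + rule.season_offset, episode + rule.episode_offset
    return season, episode  # identity mapping
```

For example, an anime release numbered `S01E14` with a rule mapping `original_season=1, first_episode=13` through an open upper bound with offsets `+1/-12` resolves to target `S02E02`, and rules never compose: the first matching scope wins outright.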

## Acceptance

- A show can exist with zero or more show-level shifted-season rules.
- A pattern can exist with zero or more pattern-level shifted-season rules.
- A shifted-season rule is stored against exactly one owner scope.
- A source tuple matching a pattern-level rule yields target numbering from that rule even when a matching show-level rule also exists.
- A source tuple matching no pattern-level rule but matching a show-level rule yields target numbering from the show-level rule.
- A source tuple matching neither scope yields identity mapping.
- A pattern-level zero-offset rule can explicitly override a nonzero show-level rule for the same covered source range.
- Two shifted-season rules for the same owner scope and original season cannot both be valid if they cover overlapping episode ranges.
- During `convert`, shifted numbering is what TMDB episode lookup and generated season and episode tokens see when stored shifting is active.
- The TUI can display and maintain shifted-season rules from both the show and pattern editing flows.

## Current Code Fit

- `src/ffx/model/show.py` and `src/ffx/model/pattern.py` now both expose shifted-season relationships, and `src/ffx/model/shifted_season.py` stores each rule against exactly one owner scope through `show_id` or `pattern_id`.
- `src/ffx/shifted_season_controller.py` now resolves mappings with pattern-over-show precedence and applies at most one active rule for a source tuple.
- `src/ffx/show_details_screen.py`, `src/ffx/shifted_season_details_screen.py`, and `src/ffx/shifted_season_delete_screen.py` provide reusable shifted-season editing dialogs, and `src/ffx/pattern_details_screen.py` now exposes the pattern-level override flow.
- `src/ffx/cli.py` now resolves shifted numbering during `convert` from: pattern-level match, then show-level match, then identity mapping.
- `src/ffx/database.py` now migrates version-2 databases to version 3 by preserving existing show-level rows and extending the schema for pattern-level ownership.

## Risks

- The current CLI groups `--show`, `--season`, and `--episode` under one override bucket used for TMDB-related behavior. Source-domain versus target-domain semantics of each override must stay documented clearly so stored shifting is neither skipped nor double-applied unexpectedly.
- Existing version-2 databases only contain show-owned shifted-season rows, so a version-3 migration must preserve those rows as the show-level fallback layer.
- Modern automated test coverage for shifted-season behavior is currently light, so precedence, migration, and convert-time numbering behavior need focused tests.

74
requirements/subtrack_mapping.md
Normal file
@@ -0,0 +1,74 @@

# Subtrack Mapping

This file defines the behavioral contract for mapping input subtracks to output subtracks during conversion.

Primary source: actual tool code in `src/ffx/`.
Secondary source: `tests/legacy/`, used only to clarify intent and reveal gaps.

## Scope

- Ensuring each target subtrack is created from the corresponding source-subtrack information, including stream-level metadata.
- Mapping input streams to output streams during conversion.
- Using persisted pattern-track definitions from the database as the target schema.
- Allowing omission and reordering of retained tracks.
- Keeping stream-level metadata attached to the correct source-derived logical track after remapping.
- Normalizing target output into ordered track groups: video, audio, subtitle, then special types such as fonts or images.

## Terms

- `source_index`: identity of the originating input stream from ffprobe or an imported source descriptor.
- `index`: final output-track order across all retained tracks.
- `sub_index`: per-type position within the retained tracks of one type, for example audio stream `0` or subtitle stream `1`.
- `target schema`: stored or constructed output-track definition that decides which tracks are kept, omitted, reordered, and rewritten.
- `separate source file`: additional file bound to one target track slot whose media payload replaces the regular source payload for that slot.

## Rules

- `SUBTRACK_MAPPING-0001`: The system shall represent source-stream identity separately from output order. `source_index`, `index`, and `sub_index` are distinct concepts and shall not be collapsed into one field.
- `SUBTRACK_MAPPING-0002`: The system shall derive `source_index` for probed tracks from the original ffprobe stream index and preserve that identity through conversion planning.
- `SUBTRACK_MAPPING-0003`: Pattern-backed track definitions stored in the database shall persist both target output order and originating source-stream identity.
- `SUBTRACK_MAPPING-0004`: When a filename matches a pattern, the pattern target schema shall be the source of truth for which source tracks are retained, which are omitted, and in what order retained tracks appear in the output.
- `SUBTRACK_MAPPING-0005`: A target track may refer only to an existing source track of the same type. Conversion shall fail fast when a target track refers to a nonexistent source stream or a source stream of a different type.
- `SUBTRACK_MAPPING-0006`: The ffmpeg mapping phase shall be generated from target output order while resolving each retained output track back to its originating source stream via `source_index`.
- `SUBTRACK_MAPPING-0007`: Reordering and omission shall preserve logical track identity. Stream-level metadata, titles, languages, and disposition decisions shall stay attached to the correct source-derived logical track after mapping.
- `SUBTRACK_MAPPING-0008`: The system shall support one-off CLI stream-order overrides without requiring prior database edits.
- `SUBTRACK_MAPPING-0009`: Operator-facing inspection and editing surfaces shall expose enough source-versus-target information to let a user reason about subtrack mapping decisions.
- `SUBTRACK_MAPPING-0010`: Test coverage for subtrack mapping shall assert source-derived identity, omission, and output order explicitly. Final track counts or final type sequences alone are insufficient proof of correct mapping.
- `SUBTRACK_MAPPING-0011`: Retained target tracks shall appear in ordered groups: video track or tracks first, then audio tracks, then subtitle tracks, then special types such as fonts or images. Within each group, the target schema shall define the order.
- `SUBTRACK_MAPPING-0012`: Track omission is valid when required by output compatibility, when needed to normalize source tracks into the required target group order and schema, or when explicitly requested by database rules or CLI options.
- `SUBTRACK_MAPPING-0013`: If source tracks do not already comply with the required target group order, conversion shall reorder retained tracks to match the target ordering contract without losing source-track identity or stream-level metadata lineage.
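
Rule 0006 can be sketched as `-map` argument construction from a target schema. The tuple layout and the single-input-file assumption below are illustrative, not the actual planner's data model:

```python
def build_map_args(target_schema, source_streams):
    """Emit ffmpeg -map arguments in target output order.

    `target_schema` lists retained tracks as (source_index, track_type)
    tuples already sorted by target output order; `source_streams` maps
    each probed source_index to its stream type. Fails fast on invalid
    target-to-source references, per SUBTRACK_MAPPING-0005.
    """
    args = []
    for source_index, track_type in target_schema:
        if source_index not in source_streams:
            raise ValueError(
                f"target track refers to missing source stream {source_index}"
            )
        if source_streams[source_index] != track_type:
            raise ValueError(
                f"source stream {source_index} is not of type {track_type!r}"
            )
        args += ["-map", f"0:{source_index}"]  # one input file assumed
    return args
```

For example, retaining video 0, audio 2 (omitting audio 1), and subtitle 3 yields `-map 0:0 -map 0:2 -map 0:3`: output order comes from the schema while each `-map` still names the originating source stream.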

## Separate Additional Source Files

- `SUBTRACK_MAPPING-0014`: A separate source file may substitute the media payload of one target subtrack without changing that target track's intended output position.
- `SUBTRACK_MAPPING-0015`: When a separate source file is used, the target track shall remain bound to the corresponding logical source track for mapping, validation, and metadata lineage.
- `SUBTRACK_MAPPING-0016`: Metadata for a substituted target track shall be merged from the regular source track and the separate source file when available.
- `SUBTRACK_MAPPING-0017`: If the separate source file provides a metadata field that is also present on the regular source track, the separate source file value shall win in the target output.
- `SUBTRACK_MAPPING-0018`: If a metadata field is absent from the separate source file, the system shall fall back to the corresponding metadata from the regular source track or target schema rewrite rules.
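
The merge precedence in rules 0016 through 0018 amounts to a dictionary overlay. A minimal sketch, assuming per-track metadata is represented as plain dicts; the relative order of the regular-source and rewrite-rule fallback layers is an assumption here, since rule 0018 names both without ranking them:

```python
def merge_track_metadata(regular_source, separate_source, rewrite_rules=None):
    """Merge per-track metadata for a substituted target track.

    Later layers win field by field: target-schema rewrite rules are the
    fallback base, the regular source track overrides them, and the
    separate source file wins whenever it provides the same field.
    """
    merged = dict(rewrite_rules or {})
    merged.update(regular_source)
    merged.update(separate_source)
    return merged
```

So an external subtitle file that carries its own `title` replaces the regular track's title, while an untouched `language` field survives from the regular source track.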
|
||||||
|
|
||||||
|
## Acceptance

- Given a source media descriptor and a pattern-backed target schema, the planned output tracks can be listed in final output order and each retained track can still be traced to one originating source stream.
- Planned output order follows grouped target order: video, audio, subtitle, then special types.
- Tracks not referenced by the target schema are omitted from output mapping.
- Tracks may also be omitted when they are incompatible with the chosen output format or explicitly excluded by database or CLI rules.
- Two retained target tracks never originate from the same source stream unless duplication is implemented explicitly as a separate feature.
- If target-track metadata is rewritten after reordering, it is written onto the correct source-derived logical track rather than the track that merely occupies the same final output position.
- Invalid target-to-source references fail deterministically before the conversion job is launched.
- If a separate source file substitutes one target track, that track keeps its target slot and ordering while metadata is merged with separate-file values taking precedence when both sides provide the same field.
- A test proving subtrack mapping must assert at least one of: exact `source_index` to output-order mapping, omission of named source tracks, or preservation of per-track metadata after reorder.

## Test Notes

- `tests/legacy/scenario.py` names pattern behavior as `Filter/Reorder Tracks`.
- `tests/legacy/scenario_4.py` is the strongest end-to-end signal because it runs DB-backed conversion and reapplies source indices before assertion.
- `tests/legacy/track_tag_combinator_2_0.py` and `tests/legacy/track_tag_combinator_3_4.py` sort result tracks by `source_index` before checking tags, which matches the intended identity model.
- Legacy permutation combinators define permutations but their assertion functions are stubs.
- Some legacy scenarios produce `AP` and `SP` selectors but do not execute them.

## Risks

- `src/ffx/media_descriptor.py` contains an explicit `rearrangeTrackDescriptors()` path whose current implementation appears defective and under-tested.
- Separate-source-file metadata precedence is only partly expressed in current implementation paths and should be covered directly in the rewritten test suite.
- Production code expresses the mapping contract more clearly than the legacy harness, so a rewrite should add direct logic-level tests for mapping and reorder planning.

144
requirements/tests.md
Normal file
@@ -0,0 +1,144 @@
# Test Rewrite

This file captures the structure executed by `tests/legacy_runner.py` today and
defines the target shape for a complete rewrite.

Detailed product rules for source-to-target subtrack mapping live in
`requirements/subtrack_mapping.md`. This file describes only how tests cover
that area.

## Interpreter Requirement

- Agents shall run Python-side test commands with `~/.local/share/ffx.venv/bin/python`.
- This applies to the legacy harness, `unittest`, `pytest`, helper scripts, and `python -m ffx ...` test invocations.
- Agents shall not silently substitute `python`, `python3`, or another interpreter for Python-side test work.
- If `~/.local/share/ffx.venv/bin/python` is missing or not executable, agents shall stop and report the missing venv instead of continuing with Python-side test execution.

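The interpreter rule can be enforced with a small wrapper; the function name and the `FFX_PY` override hook are assumptions, only the venv path comes from the rules above:

```shell
# Fail fast when the required venv interpreter is missing, instead of
# silently substituting another python (per the interpreter requirement).
ffx_python() {
    local ffx_py="${FFX_PY:-$HOME/.local/share/ffx.venv/bin/python}"
    if [ ! -x "$ffx_py" ]; then
        echo "missing venv interpreter: $ffx_py" >&2
        return 1
    fi
    "$ffx_py" "$@"
}
```

Normal use relies on the default path, e.g. `ffx_python tests/legacy_runner.py run`; `FFX_PY` exists only so the guard itself can be exercised.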
## Shell Environment Requirement

- Agents shall source `~/.bashrc` from an interactive Bash shell before running TMDB-dependent test commands or TMDB-dependent `python -m ffx ...` test invocations.
- Agents shall not source `~/.bashrc.d/interactive/77_tmdb.sh` directly for normal test work; `~/.bashrc` is the required entry point.
- In automation this means agents shall use an interactive Bash invocation such as `bash -ic 'source ~/.bashrc && ...'`, because a non-interactive `bash -lc` returns from `~/.bashrc` before the interactive fragments are loaded.
- If sourcing `~/.bashrc` still does not provide required shell environment such as `TMDB_API_KEY`, agents shall stop and report the missing environment instead of continuing with TMDB-dependent test execution.

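A matching guard for the TMDB rule: stop and report instead of letting TMDB-dependent work run without the key. The function name is illustrative; in automation it would sit behind the interactive invocation, e.g. `bash -ic 'source ~/.bashrc && check_tmdb_env && ...'`.

```shell
# Stop and report missing TMDB environment rather than letting Scenario 4
# fail mid-run; assumes ~/.bashrc has already been sourced interactively.
check_tmdb_env() {
    if [ -z "${TMDB_API_KEY:-}" ]; then
        echo "TMDB_API_KEY missing after sourcing ~/.bashrc" >&2
        return 1
    fi
}
```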
## Current Harness

- Entrypoint: `~/.local/share/ffx.venv/bin/python tests/legacy_runner.py run`
- Runner style: custom Click CLI, not `pytest` or `unittest`
- Commands:
  - `run`: discover scenario files, instantiate each scenario, run yielded jobs
  - `dupe`: helper command that creates duplicate media fixtures; not part of the test run
- Filters: `--scenario`, `--variant`, `--limit`
- Shared context:
  - builds one mutable dict for the whole run
  - installs loggers and writes `ffx_test_report.log`
  - creates `ConfigurationController` eagerly
  - tracks only passed and failed counters
- Discovery:
  - scenario files: `tests/legacy/scenario_*.py`
  - combinators: `glob + importlib + inspect` by filename convention
  - ordering: implicit glob order, no explicit sorting
- Skip behavior:
  - Scenario 4 is skipped when `TMDB_API_KEY` is missing
  - only `TMDB_API_KEY_NOT_PRESENT_EXCEPTION` is caught at scenario construction time

## Current Scenarios

- `1`: `tests/legacy/scenario_1.py`
  - focus: basename generation without pattern lookup or TMDB
  - inputs per job: `1`
  - jobs: `140`
  - expected failures: `0`
  - execution: build one synthetic source file, run `~/.local/share/ffx.venv/bin/python -m ffx convert`, assert filename selectors only
  - selectors executed: `B`, `L`, `I`
  - selectors defined but not executed: `S`, `R`
- `2`: `tests/legacy/scenario_2.py`
  - focus: conversion matrix over media layouts, dispositions, tags, and permutations
  - inputs per job: `1`
  - jobs: `8193`
  - expected failures: `3267`
  - execution: build one synthetic source file, run `~/.local/share/ffx.venv/bin/python -m ffx convert`, probe the result with `FileProperties`, assert track layout and selected audio and subtitle metadata
  - selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
  - selectors defined but not executed: `MT`, `AP`, `SP`, `J`
- `4`: `tests/legacy/scenario_4.py`
  - focus: pattern-driven batch conversion with SQLite state and live TMDB naming
  - inputs per job: `6`
  - jobs: `768`
  - expected failures: `336`
  - execution: build six synthetic preset files, recreate a temp SQLite DB, insert show and pattern, run one batch convert command via `~/.local/share/ffx.venv/bin/python`, query TMDB during assertions
  - selectors executed: `M`, `AD`, `AT`, `SD`, `ST`
  - selectors defined but not executed: `MT`, `AP`, `SP`, `J`
  - notes:
    - uses `MediaCombinator6` only
    - issues live HTTP requests through `TmdbController` with no request cache

## Current Combinator Families

- scenario files discovered: `3`
- basename combinators discovered: `2`
- media combinators discovered: `8`
- media tag combinators discovered: `3`
- disposition combinator 2 variants: `4`
- disposition combinator 3 variants: `5`
- track tag combinator 2 variants: `4`
- track tag combinator 3 variants: `5`
- indicator variants: `7`
- label variants: `2`
- show variants: `3`
- release variants: `3`
- permutation 2 variants: `2`
- permutation 3 variants: `3`

## Current Totals

- full run without TMDB: `8333`
- full run with TMDB: `9101`
- Scenario 4 generated source files: `4608`
- Scenario 4 live TMDB episode queries: `4608`

## Current Behavior Areas

- output basename rules for label, season and episode indicator, show name, and release suffix combinations
- track layout normalization across the eight media combinator shapes from `VA` through `VAASSS`
- two-track and three-track disposition edge cases, including intentional failure cases
- two-track and three-track track-tag preservation checks, including checks that sort results by source identity
- container-level media tag handling
- pattern-backed conversion against a temporary SQLite database
- TMDB-assisted episode naming for batch conversion

## Structural Findings

- The suite is process-heavy: most jobs run `ffmpeg` to generate a fixture and then spawn the FFX CLI as a subprocess.
- The suite is integration-first and has almost no isolated unit-level coverage for pure logic.
- The base `Combinator` class is a placeholder and is not the real abstraction boundary used by the suite.
- Many combinator methods are placeholders: there are `25` `pass` statements across the current test modules.
- Several assertion families are never executed because scenario selector dispatch is incomplete.
- Scenario comments mention a Scenario 3, but no `scenario_3.py` exists.
- `tests/legacy/_basename_combinator_1.py` is effectively orphaned because discovery only matches `basename_combinator_*.py`.
- `tests/legacy/disposition_combinator_2_3 .py` contains an embedded space in the filename and is still part of discovery.
- Expected failures are validated only as subprocess return-code matches, not as specific error types or messages.
- The current suite depends on `ffmpeg`, `ffprobe`, SQLite, the local Python environment, and for Scenario 4 a live TMDB API key plus network access.

## Rewrite Target

- Replace the custom Click harness with a standard test runner, preferably `pytest`.
- Split the suite into explicit layers: unit, integration, and optional external-system tests.
- Keep unit tests as the default path and make them runnable without `ffmpeg`, `ffprobe`, TMDB, or a user config directory.
- Model discovery explicitly in code instead of relying on glob-plus-reflection naming conventions.
- Convert the current Cartesian-product combinators into readable parametrized cases grouped by behavior area.
- Preserve the current behavior areas, but represent them with targeted cases instead of thousands of opaque variant IDs.
- Make every assertion family explicit and executable; there must be no selector that is produced but never consumed.
- Replace live TMDB access with fixtures or mocks in normal runs; any live-contract test must be opt-in.
- Replace ad hoc subprocess return-code checks with assertions on typed exceptions, stderr content, or structured outputs.
- Provide small reusable media fixtures or fixture builders so only a narrow integration slice needs `ffmpeg`-generated media.
- Make database tests self-contained and fast through temporary databases and direct controller-level assertions.
- Make ordering, naming, and selection deterministic so a contributor can predict exactly what will run.
- Expose a small smoke suite for quick local runs and CI, plus a separately marked slower integration suite.
- Prefer domain-oriented test modules over combinator-family modules: basename, pattern matching, metadata rewrite, track ordering, TMDB naming, CLI smoke, and failure handling.

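As a shape reference for the parametrization bullet, one opaque variant family could become something like the following; `normalize_layout` is a stand-in for the real planner, not FFX code:

```python
import pytest

def normalize_layout(layout: str) -> list:
    # Stand-in: expand one-letter track codes and apply grouped target order.
    names = {"V": "video", "A": "audio", "S": "subtitle"}
    order = {"video": 0, "audio": 1, "subtitle": 2}
    return sorted((names[c] for c in layout), key=order.__getitem__)

@pytest.mark.parametrize(
    "source_layout, expected",
    [
        ("VA", ["video", "audio"]),
        ("VAS", ["video", "audio", "subtitle"]),
        ("SVA", ["video", "audio", "subtitle"]),  # reorder case
    ],
    ids=["video-audio", "video-audio-subtitle", "subtitle-first-reordered"],
)
def test_track_layout_normalization(source_layout, expected):
    assert normalize_layout(source_layout) == expected
```

Unlike a variant ID such as `VAASSS-A:D10-S:T001`, a failing case here names its behavior directly in the test ID.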
## Rewrite Acceptance

- A default local test run finishes quickly and without network access.
- A contributor can identify which behavior a failing test covers without decoding variant strings like `VAASSS-A:D10-S:T001`.
- All current intended failure behaviors remain covered, but each one is asserted directly and readably.
- The rewritten suite can be adopted by CI without requiring live TMDB credentials.
9
src/ffx/__main__.py
Normal file
@@ -0,0 +1,9 @@
+from .cli import ffx
+
+
+def main():
+    ffx()
+
+
+if __name__ == "__main__":
+    main()
@@ -30,6 +30,15 @@ class AudioLayout(Enum):
         except:
             return AudioLayout.LAYOUT_UNDEFINED
 
+    # @staticmethod
+    # def fromIndex(index : int):
+    #     try:
+    #         target_index = int(index)
+    #     except (TypeError, ValueError):
+    #         return AudioLayout.LAYOUT_UNDEFINED
+    #     return next((a for a in AudioLayout if a.value['index'] == target_index),
+    #                 AudioLayout.LAYOUT_UNDEFINED)
+
     @staticmethod
     def fromIndex(index : int):
         try:
File diff suppressed because it is too large
@@ -1,5 +1,12 @@
 import os, json
 
+from .constants import (
+    DEFAULT_SHOW_INDEX_EPISODE_DIGITS,
+    DEFAULT_SHOW_INDEX_SEASON_DIGITS,
+    DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS,
+    DEFAULT_SHOW_INDICATOR_SEASON_DIGITS,
+)
+
 class ConfigurationController():
 
     CONFIG_FILENAME = 'ffx.json'
@@ -8,7 +15,12 @@ class ConfigurationController():
 
     DATABASE_PATH_CONFIG_KEY = 'databasePath'
     LOG_DIRECTORY_CONFIG_KEY = 'logDirectory'
+    SUBTITLES_DIRECTORY_CONFIG_KEY = 'subtitlesDirectory'
     OUTPUT_FILENAME_TEMPLATE_KEY = 'outputFilenameTemplate'
+    DEFAULT_INDEX_SEASON_DIGITS_CONFIG_KEY = 'defaultIndexSeasonDigits'
+    DEFAULT_INDEX_EPISODE_DIGITS_CONFIG_KEY = 'defaultIndexEpisodeDigits'
+    DEFAULT_INDICATOR_SEASON_DIGITS_CONFIG_KEY = 'defaultIndicatorSeasonDigits'
+    DEFAULT_INDICATOR_EPISODE_DIGITS_CONFIG_KEY = 'defaultIndicatorEpisodeDigits'
 
 
     def __init__(self):
@@ -49,6 +61,48 @@ class ConfigurationController():
     def getDatabaseFilePath(self):
         return self.__databaseFilePath
 
+    def getSubtitlesDirectoryPath(self):
+        subtitlesDirectory = self.__configurationData.get(
+            ConfigurationController.SUBTITLES_DIRECTORY_CONFIG_KEY,
+            '',
+        )
+        return os.path.expanduser(str(subtitlesDirectory)) if subtitlesDirectory else ''
+
+    @classmethod
+    def getConfiguredIntegerValue(cls, configurationData: dict, configKey: str, defaultValue: int) -> int:
+        configuredValue = configurationData.get(configKey, defaultValue)
+        try:
+            return int(configuredValue)
+        except (TypeError, ValueError):
+            return int(defaultValue)
+
+    def getDefaultIndexSeasonDigits(self):
+        return ConfigurationController.getConfiguredIntegerValue(
+            self.__configurationData,
+            ConfigurationController.DEFAULT_INDEX_SEASON_DIGITS_CONFIG_KEY,
+            DEFAULT_SHOW_INDEX_SEASON_DIGITS,
+        )
+
+    def getDefaultIndexEpisodeDigits(self):
+        return ConfigurationController.getConfiguredIntegerValue(
+            self.__configurationData,
+            ConfigurationController.DEFAULT_INDEX_EPISODE_DIGITS_CONFIG_KEY,
+            DEFAULT_SHOW_INDEX_EPISODE_DIGITS,
+        )
+
+    def getDefaultIndicatorSeasonDigits(self):
+        return ConfigurationController.getConfiguredIntegerValue(
+            self.__configurationData,
+            ConfigurationController.DEFAULT_INDICATOR_SEASON_DIGITS_CONFIG_KEY,
+            DEFAULT_SHOW_INDICATOR_SEASON_DIGITS,
+        )
+
+    def getDefaultIndicatorEpisodeDigits(self):
+        return ConfigurationController.getConfiguredIntegerValue(
+            self.__configurationData,
+            ConfigurationController.DEFAULT_INDICATOR_EPISODE_DIGITS_CONFIG_KEY,
+            DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS,
+        )
+
     def getData(self):
         return self.__configurationData
@@ -1,15 +1,30 @@
-VERSION='0.2.3'
-DATABASE_VERSION = 2
+VERSION='0.2.4'
+DATABASE_VERSION = 3
 
 DEFAULT_QUALITY = 32
 DEFAULT_AV1_PRESET = 5
 
+DEFAULT_VIDEO_ENCODER_LABEL = "vp9"
+DEFAULT_CONTAINER_FORMAT = "webm"
+DEFAULT_CONTAINER_EXTENSION = "webm"
+SUPPORTED_INPUT_FILE_EXTENSIONS = ("mkv", "mp4", "avi", "flv", "webm")
+FFMPEG_COMMAND_TOKENS = ("ffmpeg", "-y")
+FFMPEG_NULL_OUTPUT_TOKENS = ("-f", "null", "/dev/null")
+
 DEFAULT_STEREO_BANDWIDTH = "112"
 DEFAULT_AC3_BANDWIDTH = "256"
 DEFAULT_DTS_BANDWIDTH = "320"
 DEFAULT_7_1_BANDWIDTH = "384"
 
+DEFAULT_CROPDETECT_SEEK_SECONDS = 60
+DEFAULT_CROPDETECT_DURATION_SECONDS = 180
+
 DEFAULT_cut_start = 60
 DEFAULT_cut_length = 180
 
+DEFAULT_SHOW_INDEX_SEASON_DIGITS = 2
+DEFAULT_SHOW_INDEX_EPISODE_DIGITS = 2
+DEFAULT_SHOW_INDICATOR_SEASON_DIGITS = 2
+DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS = 2
+
 DEFAULT_OUTPUT_FILENAME_TEMPLATE = '{{ ffx_show_name }} - {{ ffx_index }}{{ ffx_index_separator }}{{ ffx_episode_name }}{{ ffx_indicator_separator }}{{ ffx_indicator }}'
@@ -1,20 +1,25 @@
-import os, click
+import os, shutil, click
 
-from sqlalchemy import create_engine
+from sqlalchemy import create_engine, inspect, text
 from sqlalchemy.orm import sessionmaker
 
+# Import the full model package so SQLAlchemy registers every mapped class
+# before metadata creation and the first ORM query.
+import ffx.model
 from ffx.model.show import Base
 
 from ffx.model.property import Property
+from ffx.model.migration import (
+    DatabaseVersionException,
+    getMigrationPlan,
+    migrateDatabase,
+)
 
 from ffx.constants import DATABASE_VERSION
 
 
 DATABASE_VERSION_KEY = 'database_version'
+EXPECTED_TABLE_NAMES = set(Base.metadata.tables.keys())
-class DatabaseVersionException(Exception):
-    def __init__(self, errorMessage):
-        super().__init__(errorMessage)
 
 
 def databaseContext(databasePath: str = ''):
@@ -29,12 +34,18 @@ def databaseContext(databasePath: str = ''):
         if not os.path.exists(ffxVarDir):
             os.makedirs(ffxVarDir)
         databasePath = os.path.join(ffxVarDir, 'ffx.db')
+    else:
+        databasePath = os.path.expanduser(databasePath)
+
+    if databasePath != ':memory:':
+        databasePath = os.path.abspath(databasePath)
+
+    databaseContext['path'] = databasePath
     databaseContext['url'] = f"sqlite:///{databasePath}"
     databaseContext['engine'] = create_engine(databaseContext['url'])
     databaseContext['session'] = sessionmaker(bind=databaseContext['engine'])
 
-    Base.metadata.create_all(databaseContext['engine'])
+    bootstrapDatabaseIfNeeded(databaseContext)
 
     # isSyncronuous = False
     # while not isSyncronuous:
@@ -51,14 +62,126 @@ def databaseContext(databasePath: str = ''):
 
     return databaseContext
 
 
+def databaseNeedsBootstrap(databaseContext) -> bool:
+    inspector = inspect(databaseContext['engine'])
+    existingTableNames = set(inspector.get_table_names())
+    return not EXPECTED_TABLE_NAMES.issubset(existingTableNames)
+
+
+def bootstrapDatabaseIfNeeded(databaseContext):
+    if not databaseNeedsBootstrap(databaseContext):
+        return
+
+    Base.metadata.create_all(databaseContext['engine'])
+
+
 def ensureDatabaseVersion(databaseContext):
 
     currentDatabaseVersion = getDatabaseVersion(databaseContext)
-    if currentDatabaseVersion:
-        if currentDatabaseVersion != DATABASE_VERSION:
-            raise DatabaseVersionException(f"Current database version ({currentDatabaseVersion}) does not match required ({DATABASE_VERSION})")
-    else:
-        setDatabaseVersion(databaseContext, DATABASE_VERSION)
+    if not currentDatabaseVersion:
+        setDatabaseVersion(databaseContext, DATABASE_VERSION)
+        return
+
+    if currentDatabaseVersion > DATABASE_VERSION:
+        raise DatabaseVersionException(
+            f"Current database version ({currentDatabaseVersion}) does not match required ({DATABASE_VERSION})"
+        )
+
+    if currentDatabaseVersion < DATABASE_VERSION:
+        promptForDatabaseMigration(databaseContext, currentDatabaseVersion, DATABASE_VERSION)
+        migrateDatabase(databaseContext, currentDatabaseVersion, DATABASE_VERSION, setDatabaseVersion)
+        currentDatabaseVersion = getDatabaseVersion(databaseContext)
+
+        if currentDatabaseVersion != DATABASE_VERSION:
+            raise DatabaseVersionException(
+                f"Current database version ({currentDatabaseVersion}) does not match required ({DATABASE_VERSION})"
+            )
+
+    ensureCurrentSchemaCompatibility(databaseContext)
+
+
+def ensureCurrentSchemaCompatibility(databaseContext):
+    engine = databaseContext['engine']
+    inspector = inspect(engine)
+    showColumns = {
+        column['name']
+        for column in inspector.get_columns('shows')
+    }
+
+    alterStatements = []
+    if 'quality' not in showColumns:
+        alterStatements.append("ALTER TABLE shows ADD COLUMN quality INTEGER DEFAULT 0")
+    if 'notes' not in showColumns:
+        alterStatements.append("ALTER TABLE shows ADD COLUMN notes TEXT DEFAULT ''")
+
+    if not alterStatements:
+        return
+
+    with engine.begin() as connection:
+        for alterStatement in alterStatements:
+            connection.execute(text(alterStatement))
+
+
+def promptForDatabaseMigration(databaseContext, currentDatabaseVersion: int, targetDatabaseVersion: int):
+    migrationPlan = getMigrationPlan(currentDatabaseVersion, targetDatabaseVersion)
+
+    click.echo("Database migration required.")
+    click.echo(f"Current version: {currentDatabaseVersion}")
+    click.echo(f"Target version: {targetDatabaseVersion}")
+    click.echo("Steps required:")
+
+    missingSteps = []
+    for migrationStep in migrationPlan:
+        moduleStatus = "present" if migrationStep.modulePresent else "missing"
+        click.echo(
+            f"  {migrationStep.versionFrom} -> {migrationStep.versionTo}: "
+            + f"{migrationStep.moduleName} [{moduleStatus}]"
+        )
+        if not migrationStep.modulePresent:
+            missingSteps.append(migrationStep)
+
+    if missingSteps:
+        firstMissingStep = missingSteps[0]
+        raise DatabaseVersionException(
+            f"No migration path from database version "
+            + f"{firstMissingStep.versionFrom} to {firstMissingStep.versionTo}"
+        )
+
+    if not click.confirm(
+        "Create a backup and continue with database migration?",
+        default=True,
+    ):
+        raise click.ClickException("Database migration aborted by user.")
+
+    backupPath = backupDatabaseBeforeMigration(
+        databaseContext,
+        currentDatabaseVersion,
+        targetDatabaseVersion,
+    )
+    click.echo(f"Database backup created: {backupPath}")
+
+
+def backupDatabaseBeforeMigration(databaseContext, currentDatabaseVersion: int, targetDatabaseVersion: int) -> str:
+    databasePath = databaseContext.get('path', '')
+    if not databasePath or databasePath == ':memory:':
+        raise click.ClickException("Database migration backup requires a file-backed SQLite database.")
+
+    if not os.path.isfile(databasePath):
+        raise click.ClickException(f"Database file not found for backup: {databasePath}")
+
+    backupPath = f"{databasePath}.v{currentDatabaseVersion}-to-v{targetDatabaseVersion}.bak"
+    backupIndex = 1
+    while os.path.exists(backupPath):
+        backupPath = (
+            f"{databasePath}.v{currentDatabaseVersion}-to-v{targetDatabaseVersion}.{backupIndex}.bak"
+        )
+        backupIndex += 1
+
+    databaseContext['engine'].dispose()
+    shutil.copy2(databasePath, backupPath)
+
+    return backupPath
+
+
 def getDatabaseVersion(databaseContext):
@@ -67,9 +190,9 @@ def getDatabaseVersion(databaseContext):
 
         Session = databaseContext['session']
         s = Session()
-        q = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY)
-
-        return int(q.first().value) if q.count() else 0
+        versionProperty = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY).first()
+
+        return int(versionProperty.value) if versionProperty is not None else 0
 
     except Exception as ex:
         raise click.ClickException(f"getDatabaseVersion(): {repr(ex)}")
@@ -1,4 +1,5 @@
 import os, click
+from logging import Logger
 
 from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
 
@@ -9,26 +10,37 @@ from ffx.track_codec import TrackCodec
 from ffx.video_encoder import VideoEncoder
 from ffx.process import executeProcess
 
-from ffx.constants import DEFAULT_cut_start, DEFAULT_cut_length
+from ffx.constants import (
+    DEFAULT_CONTAINER_EXTENSION,
+    DEFAULT_CONTAINER_FORMAT,
+    DEFAULT_VIDEO_ENCODER_LABEL,
+    DEFAULT_cut_start,
+    DEFAULT_cut_length,
+    FFMPEG_COMMAND_TOKENS,
+    FFMPEG_NULL_OUTPUT_TOKENS,
+    SUPPORTED_INPUT_FILE_EXTENSIONS,
+)
 
 from ffx.filter.quality_filter import QualityFilter
 from ffx.filter.preset_filter import PresetFilter
 from ffx.filter.crop_filter import CropFilter
 
+from ffx.model.pattern import Pattern
+
 
 class FfxController():
 
-    COMMAND_TOKENS = ['ffmpeg', '-y']
-    NULL_TOKENS = ['-f', 'null', '/dev/null'] # -f null /dev/null
+    COMMAND_TOKENS = list(FFMPEG_COMMAND_TOKENS)
+    NULL_TOKENS = list(FFMPEG_NULL_OUTPUT_TOKENS) # -f null /dev/null
 
     TEMP_FILE_NAME = "ffmpeg2pass-0.log"
 
-    DEFAULT_VIDEO_ENCODER = VideoEncoder.VP9.label()
+    DEFAULT_VIDEO_ENCODER = DEFAULT_VIDEO_ENCODER_LABEL
 
-    DEFAULT_FILE_FORMAT = 'webm'
-    DEFAULT_FILE_EXTENSION = 'webm'
+    DEFAULT_FILE_FORMAT = DEFAULT_CONTAINER_FORMAT
+    DEFAULT_FILE_EXTENSION = DEFAULT_CONTAINER_EXTENSION
 
-    INPUT_FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']
+    INPUT_FILE_EXTENSIONS = list(SUPPORTED_INPUT_FILE_EXTENSIONS)
 
     CHANNEL_MAP_5_1 = 'FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1'
 
@@ -42,11 +54,20 @@ class FfxController():
         self.__context = context
 
         self.__targetMediaDescriptor = targetMediaDescriptor
+        self.__sourceMediaDescriptor = sourceMediaDescriptor
 
         self.__mdcs = MediaDescriptorChangeSet(context,
                                                targetMediaDescriptor,
                                                sourceMediaDescriptor)
 
-        self.__logger = context['logger']
+        self.__logger: Logger = context['logger']
+
+    def executeCommandSequence(self, commandSequence):
+        out, err, rc = executeProcess(commandSequence, context=self.__context)
+        if rc:
+            raise click.ClickException(f"Command resulted in error: rc={rc} error={err}")
+        return out, err, rc
 
     def generateAV1Tokens(self, quality, preset, subIndex : int = 0):
@@ -95,6 +116,37 @@ class FfxController():
         return [f"-c:v:{int(subIndex)}",
                 'copy']
 
+    def generateAudioCopyTokens(self, subIndex):
+        return [f"-c:a:{int(subIndex)}", 'copy']
+
+    def generateSubtitleCopyTokens(self, subIndex):
+        return [f"-c:s:{int(subIndex)}", 'copy']
+
+    def generateAttachmentCopyTokens(self, subIndex):
+        return [f"-c:t:{int(subIndex)}", 'copy']
+
+    def generateCopyTokens(self):
+        copyTokens = []
+
+        for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
+            copyTokens += self.generateVideoCopyTokens(trackDescriptor.getSubIndex())
+
+        for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.AUDIO):
+            copyTokens += self.generateAudioCopyTokens(trackDescriptor.getSubIndex())
+
+        for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.SUBTITLE):
+            copyTokens += self.generateSubtitleCopyTokens(trackDescriptor.getSubIndex())
+
+        attachmentDescriptors = (
+            self.__sourceMediaDescriptor.getTrackDescriptors(trackType=TrackType.ATTACHMENT)
+            if self.__sourceMediaDescriptor is not None
+            else self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.ATTACHMENT)
+        )
+        for trackDescriptor in attachmentDescriptors:
+            copyTokens += self.generateAttachmentCopyTokens(trackDescriptor.getSubIndex())
+
+        return copyTokens
+
 
     def generateCropTokens(self):
 
@@ -119,6 +171,18 @@ class FfxController():
         return [outputFilePath]
 
+    def generateEncodingMetadataTags(self, videoEncoder: VideoEncoder, quality, preset) -> dict:
+        metadataTags = {}
+
+        if videoEncoder in (VideoEncoder.AV1, VideoEncoder.H264, VideoEncoder.VP9):
+            metadataTags["ENCODING_QUALITY"] = str(quality)
+
+        if videoEncoder == VideoEncoder.AV1:
+            metadataTags["ENCODING_PRESET"] = str(preset)
+
+        return metadataTags
|
||||||
|
|
||||||
|
|
||||||
def generateAudioEncodingTokens(self):
|
def generateAudioEncodingTokens(self):
|
||||||
"""Generates ffmpeg options audio streams including channel remapping, codec and bitrate"""
|
"""Generates ffmpeg options audio streams including channel remapping, codec and bitrate"""
|
||||||
|
|
||||||
@@ -179,12 +243,17 @@ class FfxController():
|
|||||||
sourcePath,
|
sourcePath,
|
||||||
targetPath,
|
targetPath,
|
||||||
targetFormat: str = '',
|
targetFormat: str = '',
|
||||||
videoEncoder: VideoEncoder = VideoEncoder.VP9,
|
|
||||||
chainIteration: list = [],
|
chainIteration: list = [],
|
||||||
cropArguments: dict = {}):
|
cropArguments: dict = {},
|
||||||
|
currentPattern: Pattern = None,
|
||||||
|
currentShowDescriptor = None):
|
||||||
# quality: int = DEFAULT_QUALITY,
|
# quality: int = DEFAULT_QUALITY,
|
||||||
# preset: int = DEFAULT_AV1_PRESET):
|
# preset: int = DEFAULT_AV1_PRESET):
|
||||||
|
|
||||||
|
|
||||||
|
videoEncoder: VideoEncoder = self.__context.get('video_encoder', VideoEncoder.VP9)
|
||||||
|
|
||||||
|
|
||||||
qualityFilters = [fy for fy in chainIteration if fy['identifier'] == 'quality']
|
qualityFilters = [fy for fy in chainIteration if fy['identifier'] == 'quality']
|
||||||
presetFilters = [fy for fy in chainIteration if fy['identifier'] == 'preset']
|
presetFilters = [fy for fy in chainIteration if fy['identifier'] == 'preset']
|
||||||
|
|
||||||
@@ -192,8 +261,26 @@ class FfxController():
|
|||||||
denoiseFilters = [fy for fy in chainIteration if fy['identifier'] == 'nlmeans']
|
denoiseFilters = [fy for fy in chainIteration if fy['identifier'] == 'nlmeans']
|
||||||
deinterlaceFilters = [fy for fy in chainIteration if fy['identifier'] == 'bwdif']
|
deinterlaceFilters = [fy for fy in chainIteration if fy['identifier'] == 'bwdif']
|
||||||
|
|
||||||
quality = (qualityFilters[0]['parameters']['quality'] if qualityFilters else QualityFilter.DEFAULT_VP9_QUALITY)
|
|
||||||
|
if qualityFilters and (quality := qualityFilters[0]['parameters']['quality']):
|
||||||
|
self.__logger.info(f"Setting quality {quality} from command line")
|
||||||
|
elif currentPattern is not None and (quality := currentPattern.quality):
|
||||||
|
self.__logger.info(f"Setting quality {quality} from pattern")
|
||||||
|
elif currentShowDescriptor is not None and (quality := currentShowDescriptor.getQuality()):
|
||||||
|
self.__logger.info(f"Setting quality {quality} from show")
|
||||||
|
else:
|
||||||
|
quality = (QualityFilter.DEFAULT_H264_QUALITY
|
||||||
|
if (videoEncoder == VideoEncoder.H264)
|
||||||
|
else QualityFilter.DEFAULT_VP9_QUALITY)
|
||||||
|
self.__logger.info(f"Setting quality {quality} from default")
|
||||||
|
|
||||||
|
|
||||||
preset = presetFilters[0]['parameters']['preset'] if presetFilters else PresetFilter.DEFAULT_PRESET
|
preset = presetFilters[0]['parameters']['preset'] if presetFilters else PresetFilter.DEFAULT_PRESET
|
||||||
|
self.__context['encoding_metadata_tags'] = self.generateEncodingMetadataTags(
|
||||||
|
videoEncoder,
|
||||||
|
quality,
|
||||||
|
preset,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
filterParamTokens = []
|
filterParamTokens = []
|
||||||
@@ -218,11 +305,33 @@ class FfxController():
|
|||||||
|
|
||||||
commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]
|
commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]
|
||||||
|
|
||||||
|
if videoEncoder == VideoEncoder.COPY:
|
||||||
|
|
||||||
|
commandSequence = (commandTokens
|
||||||
|
+ self.__targetMediaDescriptor.getImportFileTokens()
|
||||||
|
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
||||||
|
+ self.__mdcs.generateDispositionTokens())
|
||||||
|
|
||||||
|
commandSequence += self.__mdcs.generateMetadataTokens()
|
||||||
|
commandSequence += self.generateCopyTokens()
|
||||||
|
|
||||||
|
if self.__context['perform_cut']:
|
||||||
|
commandSequence += self.generateCropTokens()
|
||||||
|
|
||||||
|
commandSequence += self.generateOutputTokens(targetPath,
|
||||||
|
targetFormat)
|
||||||
|
|
||||||
|
self.__logger.debug("FfxController.runJob(): Running command sequence")
|
||||||
|
|
||||||
|
if not self.__context['dry_run']:
|
||||||
|
self.executeCommandSequence(commandSequence)
|
||||||
|
return
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.AV1:
|
if videoEncoder == VideoEncoder.AV1:
|
||||||
|
|
||||||
commandSequence = (commandTokens
|
commandSequence = (commandTokens
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
+ self.__targetMediaDescriptor.getImportFileTokens()
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens()
|
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
+ self.__mdcs.generateDispositionTokens())
|
||||||
|
|
||||||
# Optional tokens
|
# Optional tokens
|
||||||
@@ -245,14 +354,14 @@ class FfxController():
|
|||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
if not self.__context['dry_run']:
|
||||||
executeProcess(commandSequence, context = self.__context)
|
self.executeCommandSequence(commandSequence)
|
||||||
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.H264:
|
if videoEncoder == VideoEncoder.H264:
|
||||||
|
|
||||||
commandSequence = (commandTokens
|
commandSequence = (commandTokens
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
+ self.__targetMediaDescriptor.getImportFileTokens()
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens()
|
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
+ self.__mdcs.generateDispositionTokens())
|
||||||
|
|
||||||
# Optional tokens
|
# Optional tokens
|
||||||
@@ -275,7 +384,7 @@ class FfxController():
|
|||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
if not self.__context['dry_run']:
|
||||||
executeProcess(commandSequence, context = self.__context)
|
self.executeCommandSequence(commandSequence)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@@ -307,11 +416,11 @@ class FfxController():
|
|||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence 1")
|
self.__logger.debug(f"FfxController.runJob(): Running command sequence 1")
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
if not self.__context['dry_run']:
|
||||||
executeProcess(commandSequence1, context = self.__context)
|
self.executeCommandSequence(commandSequence1)
|
||||||
|
|
||||||
commandSequence2 = (commandTokens
|
commandSequence2 = (commandTokens
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
+ self.__targetMediaDescriptor.getImportFileTokens()
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens()
|
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
+ self.__mdcs.generateDispositionTokens())
|
||||||
|
|
||||||
# Optional tokens
|
# Optional tokens
|
||||||
@@ -334,9 +443,7 @@ class FfxController():
|
|||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence 2")
|
self.__logger.debug(f"FfxController.runJob(): Running command sequence 2")
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
if not self.__context['dry_run']:
|
||||||
out, err, rc = executeProcess(commandSequence2, context = self.__context)
|
self.executeCommandSequence(commandSequence2)
|
||||||
if rc:
|
|
||||||
raise click.ClickException(f"Command resulted in error: rc={rc} error={err}")
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@@ -361,4 +468,4 @@ class FfxController():
|
|||||||
str(length),
|
str(length),
|
||||||
path]
|
path]
|
||||||
|
|
||||||
out, err, rc = executeProcess(commandTokens, context = self.__context)
|
self.executeCommandSequence(commandTokens)
|
||||||
|
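The reworked quality resolution in runJob() walks a precedence chain (command-line filter, then pattern, then show descriptor, then a per-encoder default) using assignment expressions. A minimal standalone sketch of that precedence logic, with plain values standing in for the filter/pattern/show objects; the default values 23 and 31 are illustrative assumptions, since the actual `DEFAULT_H264_QUALITY`/`DEFAULT_VP9_QUALITY` constants are not shown in this diff:

```python
def resolve_quality(cli_quality=None, pattern_quality=None, show_quality=None,
                    encoder="vp9"):
    """Pick the first truthy quality in precedence order, else an encoder default.

    Mirrors the diff's chain: command line -> pattern -> show -> default.
    """
    if cli_quality:                      # highest precedence: explicit CLI value
        source, quality = "command line", cli_quality
    elif pattern_quality:                # next: value stored on a filename pattern
        source, quality = "pattern", pattern_quality
    elif show_quality:                   # next: value stored on the show descriptor
        source, quality = "show", show_quality
    else:                                # fall back to a per-encoder default (values assumed)
        quality = 23 if encoder == "h264" else 31
        source = "default"
    return quality, source
```

A falsy stored quality (e.g. 0 or None) falls through to the next source, which is exactly the behaviour the walrus-based `elif` chain in the diff produces.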
@@ -1,124 +0,0 @@
-#! /usr/bin/python3
-
-import os, logging, click
-
-from ffx.configuration_controller import ConfigurationController
-
-from ffx.file_properties import FileProperties
-from ffx.ffx_controller import FfxController
-
-from ffx.test.helper import createMediaTestFile
-
-from ffx.test.scenario import Scenario
-from ffx.tmdb_controller import TMDB_API_KEY_NOT_PRESENT_EXCEPTION
-
-
-@click.group()
-@click.pass_context
-@click.option('-v', '--verbose', type=int, default=0, help='Set verbosity of output')
-@click.option("--dry-run", is_flag=True, default=False)
-def ffx(ctx, verbose, dry_run):
-    """FFX"""
-
-    ctx.obj = {}
-
-    ctx.obj['config'] = ConfigurationController()
-
-    ctx.obj['database'] = None
-    ctx.obj['dry_run'] = dry_run
-
-    ctx.obj['verbosity'] = verbose
-
-    # Critical 50
-    # Error 40
-    # Warning 30
-    # Info 20
-    # Debug 10
-    fileLogVerbosity = max(40 - verbose * 10, 10)
-    consoleLogVerbosity = max(20 - verbose * 10, 10)
-
-    ctx.obj['logger'] = logging.getLogger('FFX Tests')
-    ctx.obj['logger'].setLevel(logging.DEBUG)
-
-    ctx.obj['report_logger'] = logging.getLogger('FFX Test Result')
-    ctx.obj['report_logger'].setLevel(logging.INFO)
-
-    ffxFileHandler = logging.FileHandler(ctx.obj['config'].getLogFilePath())
-    ffxFileHandler.setLevel(fileLogVerbosity)
-    ffxConsoleHandler = logging.StreamHandler()
-    ffxConsoleHandler.setLevel(consoleLogVerbosity)
-
-    if os.path.isfile('ffx_test_report.log'):
-        os.unlink('ffx_test_report.log')
-    ffxTestReportFileHandler = logging.FileHandler('ffx_test_report.log')
-
-    fileFormatter = logging.Formatter(
-        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
-    ffxFileHandler.setFormatter(fileFormatter)
-    consoleFormatter = logging.Formatter(
-        '%(message)s')
-    ffxConsoleHandler.setFormatter(consoleFormatter)
-    reportFormatter = logging.Formatter(
-        '%(message)s')
-    ffxTestReportFileHandler.setFormatter(reportFormatter)
-
-    ctx.obj['logger'].addHandler(ffxConsoleHandler)
-    ctx.obj['logger'].addHandler(ffxFileHandler)
-
-    ctx.obj['report_logger'].addHandler(ffxConsoleHandler)
-    ctx.obj['report_logger'].addHandler(ffxTestReportFileHandler)
-
-
-# Another subcommand
-@ffx.command()
-@click.pass_context
-@click.option('--scenario', type=str, default='', help='Only run tests from this scenario')
-@click.option('--variant', type=str, default='', help='Only run variants beginning like this')
-@click.option('--limit', type=int, default=0, help='Only run this number of tests')
-def run(ctx, scenario, variant, limit):
-    """Run ffx test sequences"""
-
-    ctx.obj['logger'].info('Starting FFX test runs')
-    ctx.obj['test_passed_counter'] = 0
-    ctx.obj['test_failed_counter'] = 0
-
-    ctx.obj['test_variant'] = variant
-    ctx.obj['test_limit'] = limit
-
-    for si in Scenario.list():
-
-        try:
-            SCEN = Scenario.getClassReference(si)
-            scen = SCEN(ctx.obj)
-
-            if scenario and scenario != scen.getScenario():
-                continue
-
-            ctx.obj['logger'].debug(f"Running scenario {si}")
-
-            scen.run()
-
-        except TMDB_API_KEY_NOT_PRESENT_EXCEPTION:
-            ctx.obj['logger'].info(f"TMDB_API_KEY not set: Skipping {SCEN.__class__.__name__}")
-
-    ctx.obj['logger'].info(f"\n{ctx.obj['test_passed_counter']} tests passed")
-    ctx.obj['logger'].info(f"{ctx.obj['test_failed_counter']} test failed")
-    ctx.obj['logger'].info('\nDone.')
-
-
-@ffx.command()
-@click.pass_context
-@click.argument('paths', nargs=-1)
-def dupe(ctx, paths):
-
-    existingSourcePaths = [p for p in paths if os.path.isfile(p) and p.split('.')[-1] in FfxController.INPUT_FILE_EXTENSIONS]
-
-    for sourcePath in existingSourcePaths:
-
-        sourceFileProperties = FileProperties(ctx.obj, sourcePath)
-        sourceMediaDescriptor = sourceFileProperties.getMediaDescriptor()
-
-        createMediaTestFile(sourceMediaDescriptor, baseName='dupe')
-
-
-if __name__ == '__main__':
-    ffx()
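The removed test CLI derived its `logging` levels from the `-v` count with a clamped linear formula over the standard numeric levels (Critical 50, Error 40, Warning 30, Info 20, Debug 10). The mapping in isolation, using the exact formulas from the deleted script:

```python
def log_levels(verbose):
    """Map a -v count to (file, console) numeric log levels, clamped at DEBUG (10).

    The file handler starts at ERROR (40), the console handler at INFO (20);
    each -v lowers both by one level until they bottom out at DEBUG.
    """
    file_level = max(40 - verbose * 10, 10)
    console_level = max(20 - verbose * 10, 10)
    return file_level, console_level
```

So `-v` already puts the console at DEBUG while the file log only reaches WARNING; `-vvv` and beyond saturates both.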
@@ -1,5 +1,11 @@
 import os, re, json
 
+from .constants import (
+    DEFAULT_CROPDETECT_DURATION_SECONDS,
+    DEFAULT_CROPDETECT_SEEK_SECONDS,
+    FFMPEG_COMMAND_TOKENS,
+    FFMPEG_NULL_OUTPUT_TOKENS,
+)
 from .media_descriptor import MediaDescriptor
 from .pattern_controller import PatternController
 
@@ -11,8 +17,10 @@ from ffx.model.pattern import Pattern
 
 
 class FileProperties():
+    _cropdetect_cache: dict[tuple[str, int, int, int, int], dict[str, str]] = {}
 
     FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']
+    FFPROBE_COMMAND_TOKENS = ["ffprobe", "-hide_banner", "-show_format", "-show_streams", "-of", "json"]
 
     SE_INDICATOR_PATTERN = '([sS][0-9]+[eE][0-9]+)'
     SEASON_EPISODE_INDICATOR_MATCH = '[sS]([0-9]+)[eE]([0-9]+)'
@@ -22,6 +30,18 @@ class FileProperties():
 
     DEFAULT_INDEX_DIGITS = 3
 
+    @classmethod
+    def extractSeasonEpisodeValues(cls, sourceText: str) -> tuple[int | None, int] | None:
+        seasonEpisodeMatch = re.search(cls.SEASON_EPISODE_INDICATOR_MATCH, str(sourceText))
+        if seasonEpisodeMatch is not None:
+            return int(seasonEpisodeMatch.group(1)), int(seasonEpisodeMatch.group(2))
+
+        episodeMatch = re.search(cls.EPISODE_INDICATOR_MATCH, str(sourceText))
+        if episodeMatch is not None:
+            return None, int(episodeMatch.group(1))
+
+        return None
+
     def __init__(self, context, sourcePath):
 
         self.context = context
@@ -44,9 +64,10 @@ class FileProperties():
         self.__sourceFilenameExtension = ''
 
         self.__pc = PatternController(context)
+        self.__usePattern = bool(self.context.get('use_pattern', True))
 
         # Checking if database contains matching pattern
-        matchResult = self.__pc.matchFilename(self.__sourceFilename)
+        matchResult = self.__pc.matchFilename(self.__sourceFilename) if self.__usePattern else {}
 
         self.__logger.debug(f"FileProperties.__init__(): Match result: {matchResult}")
 
@@ -56,26 +77,67 @@ class FileProperties():
             databaseMatchedGroups = matchResult['match'].groups()
             self.__logger.debug(f"FileProperties.__init__(): Matched groups: {databaseMatchedGroups}")
 
-            seIndicator = databaseMatchedGroups[0]
-
-            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, seIndicator)
-            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, seIndicator)
+            indicatorSource = databaseMatchedGroups[0]
 
         else:
             self.__logger.debug(f"FileProperties.__init__(): Checking file name for indicator {self.__sourceFilename}")
+            indicatorSource = self.__sourceFilename
 
-            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, self.__sourceFilename)
-            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, self.__sourceFilename)
-
-        if se_match is not None:
-            self.__season = int(se_match.group(1))
-            self.__episode = int(se_match.group(2))
-        elif e_match is not None:
-            self.__season = -1
-            self.__episode = int(e_match.group(1))
-        else:
+        seasonEpisodeValues = self.extractSeasonEpisodeValues(indicatorSource)
+        if seasonEpisodeValues is None:
             self.__season = -1
             self.__episode = -1
+        else:
+            sourceSeason, sourceEpisode = seasonEpisodeValues
+            self.__season = -1 if sourceSeason is None else int(sourceSeason)
+            self.__episode = int(sourceEpisode)
+
+        self.__ffprobeData = None
+
+    def _getCropdetectWindow(self):
+        cropdetectContext = self.context.get('cropdetect', {})
+
+        seekSeconds = int(cropdetectContext.get('seek_seconds', DEFAULT_CROPDETECT_SEEK_SECONDS))
+        durationSeconds = int(cropdetectContext.get('duration_seconds', DEFAULT_CROPDETECT_DURATION_SECONDS))
+
+        if seekSeconds < 0:
+            raise ValueError("Crop detection seek seconds must be zero or greater.")
+        if durationSeconds <= 0:
+            raise ValueError("Crop detection duration seconds must be greater than zero.")
+
+        return seekSeconds, durationSeconds
+
+    def _getCropdetectCacheKey(self):
+        sourceStat = os.stat(self.__sourcePath)
+        seekSeconds, durationSeconds = self._getCropdetectWindow()
+
+        return (
+            os.path.abspath(self.__sourcePath),
+            sourceStat.st_mtime_ns,
+            sourceStat.st_size,
+            seekSeconds,
+            durationSeconds,
+        )
+
+    @classmethod
+    def _clear_cropdetect_cache(cls):
+        cls._cropdetect_cache.clear()
+
+    def _getFfprobeData(self):
+        if self.__ffprobeData is not None:
+            return self.__ffprobeData
+
+        ffprobeOutput, ffprobeError, returnCode = executeProcess(
+            FileProperties.FFPROBE_COMMAND_TOKENS + [self.__sourcePath]
+        )
+
+        if 'Invalid data found when processing input' in ffprobeError:
+            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")
+
+        if returnCode != 0:
+            raise Exception(f"ffprobe returned with error {returnCode}")
+
+        self.__ffprobeData = json.loads(ffprobeOutput)
+        return self.__ffprobeData
+
 
     def getFormatData(self):
@@ -98,22 +160,7 @@ class FileProperties():
             }
         }
         """
-        # ffprobe -hide_banner -show_format -of json
-        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
-                                                                  "-hide_banner",
-                                                                  "-show_format",
-                                                                  "-of", "json",
-                                                                  self.__sourcePath]) #,
-                                                                  #context = self.context)
-
-        if 'Invalid data found when processing input' in ffprobeError:
-            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")
-
-        if returnCode != 0:
-            raise Exception(f"ffprobe returned with error {returnCode}")
-
-        return json.loads(ffprobeOutput)['format']
+        return self._getFfprobeData()['format']
 
 
     def getStreamData(self):
@@ -158,40 +205,32 @@ class FileProperties():
             }
         }
        """
-        # ffprobe -hide_banner -show_streams -of json
-        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
-                                                                  "-hide_banner",
-                                                                  "-show_streams",
-                                                                  "-of", "json",
-                                                                  self.__sourcePath]) #,
-                                                                  #context = self.context)
-
-        if 'Invalid data found when processing input' in ffprobeError:
-            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")
-
-        if returnCode != 0:
-            raise Exception(f"ffprobe returned with error {returnCode}")
-
-        return json.loads(ffprobeOutput)['streams']
+        return self._getFfprobeData()['streams']
 
 
     def findCropArguments(self):
        """"""
 
-        # ffmpeg -i <input.file> -vf cropdetect -f null -
-        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffmpeg", "-i",
-                                                                  self.__sourcePath,
-                                                                  "-vf", "cropdetect",
-                                                                  "-ss", "60",
-                                                                  "-t", "180",
-                                                                  "-f", "null", "-"
-                                                                  ])
-
-        errorLines = ffprobeError.split('\n')
+        cacheKey = self._getCropdetectCacheKey()
+        cachedCropArguments = FileProperties._cropdetect_cache.get(cacheKey)
+        if cachedCropArguments is not None:
+            self.__logger.debug(
+                "FileProperties.findCropArguments(): Reusing cached cropdetect result for %s",
+                self.__sourcePath,
+            )
+            return dict(cachedCropArguments)
+
+        seekSeconds, durationSeconds = self._getCropdetectWindow()
+
+        cropdetectCommand = (
+            list(FFMPEG_COMMAND_TOKENS)
+            + ["-ss", str(seekSeconds), "-i", self.__sourcePath, "-t", str(durationSeconds), "-vf", "cropdetect"]
+            + list(FFMPEG_NULL_OUTPUT_TOKENS)
+        )
+        _ffmpegOutput, ffmpegError, returnCode = executeProcess(cropdetectCommand, context=self.context)
+
+        errorLines = ffmpegError.split('\n')
 
         crops = {}
         for el in errorLines:
@@ -204,20 +243,25 @@ class FileProperties():
                 crops[cropParam] = crops.get(cropParam, 0) + 1
 
         if crops:
-            cropHistogram = sorted(crops, reverse=True)
-            cropString = cropHistogram[0]
+            cropString = max(crops.items(), key=lambda item: (item[1], item[0]))[0]
 
             cropTokens = cropString.split('=')
             cropValueTokens = cropTokens[1]
             cropValues = cropValueTokens.split(':')
 
-            return {
+            cropArguments = {
                 CropFilter.OUTPUT_WIDTH_KEY: cropValues[0],
                 CropFilter.OUTPUT_HEIGHT_KEY: cropValues[1],
                 CropFilter.OFFSET_X_KEY: cropValues[2],
                 CropFilter.OFFSET_Y_KEY: cropValues[3]
             }
-        else:
+            FileProperties._cropdetect_cache[cacheKey] = dict(cropArguments)
+            return cropArguments
+
+        if returnCode != 0:
+            raise Exception(f"ffmpeg cropdetect returned with error {returnCode}")
+
+        FileProperties._cropdetect_cache[cacheKey] = {}
         return {}
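The cropdetect fix above replaces `sorted(crops, reverse=True)`, which ordered the crop *strings* lexicographically and ignored their frequencies, with a `max` over the histogram items keyed on `(count, string)`. A minimal sketch of the corrected selection (the function name is illustrative, not from the repository):

```python
def most_common_crop(crop_counts):
    """Return the most frequent cropdetect string, or None for an empty histogram.

    Ties break toward the lexicographically larger string, mirroring the
    (count, key) sort key used in the diff.
    """
    if not crop_counts:
        return None
    return max(crop_counts.items(), key=lambda item: (item[1], item[0]))[0]
```

With the old key-only sort, `{"crop=1920:800:0:140": 30, "crop=720:576:0:0": 1}` would have picked `crop=720:576:0:0` simply because "7" sorts after "1"; the count-first key picks the crop ffmpeg actually reported most often.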
@@ -1,7 +1,9 @@
-import itertools
+import click
 
 from .filter import Filter
 
+from ffx.video_encoder import VideoEncoder
+
 
 class QualityFilter(Filter):
 
@@ -14,6 +16,9 @@ class QualityFilter(Filter):
 
     def __init__(self, **kwargs):
 
+        context = click.get_current_context().obj
+
+
         self.__qualitiesList = []
         qualities = kwargs.get(QualityFilter.QUALITY_KEY, '')
         if qualities:
@@ -27,7 +32,9 @@ class QualityFilter(Filter):
                     raise ValueError('QualityFilter: Quality value has to be between 0 and 63')
                 self.__qualitiesList.append(qualityValue)
         else:
-            self.__qualitiesList = [QualityFilter.DEFAULT_VP9_QUALITY]
+            self.__qualitiesList = [None]
+
 
         super().__init__(self)
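The `QualityFilter` change swaps the hard-coded VP9 default for a `[None]` sentinel, so an unset quality can be resolved later against pattern, show, or per-encoder defaults. A rough sketch of that parsing and range check, assuming a comma-separated input string (the exact tokenization is not visible in this hunk):

```python
def parse_qualities(qualities):
    """Parse a comma-separated quality string; empty input yields a [None] sentinel.

    The sentinel lets the caller resolve the real default later (pattern, show,
    or per-encoder default) instead of hard-coding the VP9 default here.
    """
    if not qualities:
        return [None]
    values = []
    for token in qualities.split(','):
        value = int(token)
        if not 0 <= value <= 63:
            raise ValueError('QualityFilter: Quality value has to be between 0 and 63')
        values.append(value)
    return values
```

The 0-63 range matches the libvpx/libaom CRF scale the filter validates against.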
@@ -1,8 +1,10 @@
|
|||||||
import re, logging
|
import re
|
||||||
|
|
||||||
from jinja2 import Environment, Undefined
|
from jinja2 import Environment, Undefined
|
||||||
from .constants import DEFAULT_OUTPUT_FILENAME_TEMPLATE
|
from .constants import DEFAULT_OUTPUT_FILENAME_TEMPLATE
|
||||||
from .configuration_controller import ConfigurationController
|
from .configuration_controller import ConfigurationController
|
||||||
|
from .logging_utils import get_ffx_logger
|
||||||
|
from .show_descriptor import ShowDescriptor
|
||||||
|
|
||||||
|
|
||||||
class EmptyStringUndefined(Undefined):
|
class EmptyStringUndefined(Undefined):
|
||||||
@@ -15,7 +17,21 @@ DIFF_REMOVED_KEY = 'removed'
|
|||||||
DIFF_CHANGED_KEY = 'changed'
|
DIFF_CHANGED_KEY = 'changed'
|
||||||
DIFF_UNCHANGED_KEY = 'unchanged'
|
DIFF_UNCHANGED_KEY = 'unchanged'
|
||||||
|
|
||||||
RICH_COLOR_PATTERN = '\[[a-z_]+\](.+)\[\/[a-z_]+\]'
|
FILENAME_FILTER_TRANSLATION = str.maketrans(
|
||||||
|
{
|
||||||
|
"/": "-",
|
||||||
|
":": ";",
|
||||||
|
"*": "",
|
||||||
|
"'": "",
|
||||||
|
"?": "#",
|
||||||
|
"♥": "",
|
||||||
|
"’": "",
|
||||||
|
}
|
||||||
|
)
|
||||||
|
TMDB_FILLER_MARKERS = (" (*)", "(*)")
|
||||||
|
TMDB_EPISODE_RANGE_SUFFIX_REGEX = re.compile(r"\(([0-9]+)[-/]([0-9]+)\)$")
|
||||||
|
TMDB_EPISODE_PART_SUFFIX_REGEX = re.compile(r"\(([0-9]+)\)$")
|
||||||
|
RICH_COLOR_REGEX = re.compile(r"\[[a-z_]+\](.+)\[/[a-z_]+\]")
|
||||||
|
|
||||||
|
|
||||||
def dictDiff(a : dict, b : dict, ignoreKeys: list = [], removeKeys: list = []):
|
def dictDiff(a : dict, b : dict, ignoreKeys: list = [], removeKeys: list = []):
|
||||||
@@ -114,49 +130,45 @@ def filterFilename(fileName: str) -> str:
|
|||||||
"""This filter replaces charactes from TMDB responses with characters
|
"""This filter replaces charactes from TMDB responses with characters
|
||||||
less problemating when using in filenames or removes them"""
|
less problemating when using in filenames or removes them"""
|
||||||
|
|
||||||
fileName = str(fileName).replace('/', '-')
|
return str(fileName).translate(FILENAME_FILTER_TRANSLATION).strip()
|
||||||
fileName = str(fileName).replace(':', ';')
|
|
||||||
fileName = str(fileName).replace('*', '')
|
|
||||||
fileName = str(fileName).replace("'", '')
|
|
||||||
fileName = str(fileName).replace("?", '#')
|
|
||||||
fileName = str(fileName).replace('♥', '')
|
|
||||||
fileName = str(fileName).replace('’', '')
|
|
||||||
|
|
||||||
return fileName.strip()
|
|
||||||
|
|
||||||
 def substituteTmdbFilename(fileName: str) -> str:
     """If chaining this method with filterFilename use this one first as the latter will destroy some patterns"""

-    # This indicates filler episodes in TMDB episode names
-    fileName = str(fileName).replace(' (*)', '')
-    fileName = str(fileName).replace('(*)', '')
-
-    # This indicates the index of multi-episode files
-    episodePartMatch = re.search("\\(([0-9]+)\\)$", fileName)
+    normalizedFileName = str(fileName)
+    for fillerMarker in TMDB_FILLER_MARKERS:
+        normalizedFileName = normalizedFileName.replace(fillerMarker, '')
+
+    episodeRangeMatch = TMDB_EPISODE_RANGE_SUFFIX_REGEX.search(normalizedFileName)
+    if episodeRangeMatch is not None:
+        partFirstIndex, partLastIndex = episodeRangeMatch.groups()
+        return TMDB_EPISODE_RANGE_SUFFIX_REGEX.sub(
+            f"Teil {partFirstIndex}-{partLastIndex}",
+            normalizedFileName,
+            count=1,
+        )
+
+    episodePartMatch = TMDB_EPISODE_PART_SUFFIX_REGEX.search(normalizedFileName)
     if episodePartMatch is not None:
-        partSuffix = str(episodePartMatch.group(0))
-        partIndex = episodePartMatch.groups()[0]
-        fileName = str(fileName).replace(partSuffix, f"Teil {partIndex}")
-
-    # Also multi-episodes with first and last episode index
-    episodePartMatch = re.search("\\(([0-9]+)[-\\/]([0-9]+)\\)$", fileName)
-    if episodePartMatch is not None:
-        partSuffix = str(episodePartMatch.group(0))
-        partFirstIndex = episodePartMatch.groups()[0]
-        partLastIndex = episodePartMatch.groups()[1]
-        fileName = str(fileName).replace(partSuffix, f"Teil {partFirstIndex}-{partLastIndex}")
-
-    return fileName
+        partIndex = episodePartMatch.group(1)
+        return TMDB_EPISODE_PART_SUFFIX_REGEX.sub(
+            f"Teil {partIndex}",
+            normalizedFileName,
+            count=1,
+        )
+
+    return normalizedFileName

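The rewritten suffix handling above can be exercised standalone. The `TMDB_FILLER_MARKERS`, `TMDB_EPISODE_RANGE_SUFFIX_REGEX`, and `TMDB_EPISODE_PART_SUFFIX_REGEX` constants are defined outside this hunk, so the definitions below are illustrative stand-ins that merely match how the diff uses them:

```python
import re

# Hypothetical stand-ins for module-level constants not shown in this changeset.
TMDB_FILLER_MARKERS = [' (*)', '(*)']
TMDB_EPISODE_RANGE_SUFFIX_REGEX = re.compile(r"\(([0-9]+)[-/]([0-9]+)\)$")
TMDB_EPISODE_PART_SUFFIX_REGEX = re.compile(r"\(([0-9]+)\)$")

def substituteTmdbFilename(fileName: str) -> str:
    # Strip TMDB filler markers first, then rewrite trailing part indicators.
    normalizedFileName = str(fileName)
    for fillerMarker in TMDB_FILLER_MARKERS:
        normalizedFileName = normalizedFileName.replace(fillerMarker, '')

    # Multi-episode range suffix, e.g. "(1/2)" -> "Teil 1-2"
    episodeRangeMatch = TMDB_EPISODE_RANGE_SUFFIX_REGEX.search(normalizedFileName)
    if episodeRangeMatch is not None:
        partFirstIndex, partLastIndex = episodeRangeMatch.groups()
        return TMDB_EPISODE_RANGE_SUFFIX_REGEX.sub(
            f"Teil {partFirstIndex}-{partLastIndex}", normalizedFileName, count=1)

    # Single part suffix, e.g. "(2)" -> "Teil 2"
    episodePartMatch = TMDB_EPISODE_PART_SUFFIX_REGEX.search(normalizedFileName)
    if episodePartMatch is not None:
        partIndex = episodePartMatch.group(1)
        return TMDB_EPISODE_PART_SUFFIX_REGEX.sub(
            f"Teil {partIndex}", normalizedFileName, count=1)

    return normalizedFileName

print(substituteTmdbFilename("Der Schatz (2)"))    # Der Schatz Teil 2
print(substituteTmdbFilename("Der Schatz (1/2)"))  # Der Schatz Teil 1-2
```

Checking the range regex before the single-part regex matters: `"(1/2)"` would otherwise never match the narrower pattern, but `"(2)"` must not be swallowed by the range one.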
 def getEpisodeFileBasename(showName,
                            episodeName,
                            season,
                            episode,
-                           indexSeasonDigits = 2,
-                           indexEpisodeDigits = 2,
-                           indicatorSeasonDigits = 2,
-                           indicatorEpisodeDigits = 2,
+                           indexSeasonDigits = None,
+                           indexEpisodeDigits = None,
+                           indicatorSeasonDigits = None,
+                           indicatorEpisodeDigits = None,
                            context = None):
     """
     One Piece:
@@ -188,12 +200,21 @@ def getEpisodeFileBasename(showName,
     configData = cc.getData() if cc is not None else {}
     outputFilenameTemplate = configData.get(ConfigurationController.OUTPUT_FILENAME_TEMPLATE_KEY,
                                             DEFAULT_OUTPUT_FILENAME_TEMPLATE)
+    defaultDigitLengths = ShowDescriptor.getDefaultDigitLengths(context)
+
+    if indexSeasonDigits is None:
+        indexSeasonDigits = defaultDigitLengths[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]
+    if indexEpisodeDigits is None:
+        indexEpisodeDigits = defaultDigitLengths[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]
+    if indicatorSeasonDigits is None:
+        indicatorSeasonDigits = defaultDigitLengths[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]
+    if indicatorEpisodeDigits is None:
+        indicatorEpisodeDigits = defaultDigitLengths[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]

     if context is not None and 'logger' in context.keys():
         logger = context['logger']
     else:
-        logger = logging.getLogger('FFX')
-        logger.addHandler(logging.NullHandler())
+        logger = get_ffx_logger()

     indexSeparator = ' ' if indexSeasonDigits or indexEpisodeDigits else ''
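The `None`-defaulting introduced above lets callers fall back to per-show digit lengths instead of a hard-coded `2`. `ShowDescriptor.getDefaultDigitLengths` is not shown in this diff, so the dict below is a hypothetical stand-in for the values it returns:

```python
# Stand-in for ShowDescriptor.getDefaultDigitLengths(context); keys are illustrative.
DEFAULT_DIGIT_LENGTHS = {"index_season_digits": 2, "index_episode_digits": 2}

def formatIndex(season, episode, indexSeasonDigits=None, indexEpisodeDigits=None):
    # None means "use the configured default", mirroring the diff's pattern.
    if indexSeasonDigits is None:
        indexSeasonDigits = DEFAULT_DIGIT_LENGTHS["index_season_digits"]
    if indexEpisodeDigits is None:
        indexEpisodeDigits = DEFAULT_DIGIT_LENGTHS["index_episode_digits"]
    return f"S{int(season):0{indexSeasonDigits}d}E{int(episode):0{indexEpisodeDigits}d}"

print(formatIndex(1, 7))                        # S01E07
print(formatIndex(1, 7, indexEpisodeDigits=4))  # S01E0007
```

Using `None` as the sentinel keeps an explicit `0` or `2` from a caller distinguishable from "not given", which a default of `2` in the signature could not do.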
@@ -231,9 +252,8 @@ def formatRichColor(text: str, color: str = None):
     return f"[{color}]{text}[/{color}]"

 def removeRichColor(text: str):
-    richColorMatch = re.search(RICH_COLOR_PATTERN, text)
+    richColorMatch = RICH_COLOR_REGEX.search(str(text))
     if richColorMatch is None:
         return text
     else:
         return str(richColorMatch.group(1))
@@ -1,68 +1,155 @@
 from enum import Enum
 import difflib


 class IsoLanguage(Enum):

+    ABKHAZIAN = {"name": "Abkhazian", "iso639_1": "ab", "iso639_2": ["abk"]}
+    AFAR = {"name": "Afar", "iso639_1": "aa", "iso639_2": ["aar"]}
     AFRIKAANS = {"name": "Afrikaans", "iso639_1": "af", "iso639_2": ["afr"]}
-    ALBANIAN = {"name": "Albanian", "iso639_1": "sq", "iso639_2": ["alb"]}
+    AKAN = {"name": "Akan", "iso639_1": "ak", "iso639_2": ["aka"]}
+    ALBANIAN = {"name": "Albanian", "iso639_1": "sq", "iso639_2": ["sqi", "alb"]}
+    AMHARIC = {"name": "Amharic", "iso639_1": "am", "iso639_2": ["amh"]}
     ARABIC = {"name": "Arabic", "iso639_1": "ar", "iso639_2": ["ara"]}
-    ARMENIAN = {"name": "Armenian", "iso639_1": "hy", "iso639_2": ["arm"]}
+    ARAGONESE = {"name": "Aragonese", "iso639_1": "an", "iso639_2": ["arg"]}
+    ARMENIAN = {"name": "Armenian", "iso639_1": "hy", "iso639_2": ["hye", "arm"]}
+    ASSAMESE = {"name": "Assamese", "iso639_1": "as", "iso639_2": ["asm"]}
+    AVARIC = {"name": "Avaric", "iso639_1": "av", "iso639_2": ["ava"]}
+    AVESTAN = {"name": "Avestan", "iso639_1": "ae", "iso639_2": ["ave"]}
+    AYMARA = {"name": "Aymara", "iso639_1": "ay", "iso639_2": ["aym"]}
     AZERBAIJANI = {"name": "Azerbaijani", "iso639_1": "az", "iso639_2": ["aze"]}
-    BASQUE = {"name": "Basque", "iso639_1": "eu", "iso639_2": ["baq"]}
+    BAMBARA = {"name": "Bambara", "iso639_1": "bm", "iso639_2": ["bam"]}
+    BASHKIR = {"name": "Bashkir", "iso639_1": "ba", "iso639_2": ["bak"]}
+    BASQUE = {"name": "Basque", "iso639_1": "eu", "iso639_2": ["eus", "baq"]}
     BELARUSIAN = {"name": "Belarusian", "iso639_1": "be", "iso639_2": ["bel"]}
-    BOKMAL = {"name": "Bokmål", "iso639_1": "nb", "iso639_2": ["nob"]} # Norwegian Bokmål
+    BENGALI = {"name": "Bengali", "iso639_1": "bn", "iso639_2": ["ben"]}
+    BISLAMA = {"name": "Bislama", "iso639_1": "bi", "iso639_2": ["bis"]}
+    BOKMAL = {"name": "Bokmål", "iso639_1": "nb", "iso639_2": ["nob"]}
+    BOSNIAN = {"name": "Bosnian", "iso639_1": "bs", "iso639_2": ["bos"]}
+    BRETON = {"name": "Breton", "iso639_1": "br", "iso639_2": ["bre"]}
     BULGARIAN = {"name": "Bulgarian", "iso639_1": "bg", "iso639_2": ["bul"]}
+    BURMESE = {"name": "Burmese", "iso639_1": "my", "iso639_2": ["mya", "bur"]}
     CATALAN = {"name": "Catalan", "iso639_1": "ca", "iso639_2": ["cat"]}
+    CHAMORRO = {"name": "Chamorro", "iso639_1": "ch", "iso639_2": ["cha"]}
+    CHECHEN = {"name": "Chechen", "iso639_1": "ce", "iso639_2": ["che"]}
+    CHICHEWA = {"name": "Chichewa", "iso639_1": "ny", "iso639_2": ["nya"]}
     CHINESE = {"name": "Chinese", "iso639_1": "zh", "iso639_2": ["zho", "chi"]}
+    CHURCH_SLAVIC = {"name": "Church Slavic", "iso639_1": "cu", "iso639_2": ["chu"]}
+    CHUVASH = {"name": "Chuvash", "iso639_1": "cv", "iso639_2": ["chv"]}
+    CORNISH = {"name": "Cornish", "iso639_1": "kw", "iso639_2": ["cor"]}
+    CORSICAN = {"name": "Corsican", "iso639_1": "co", "iso639_2": ["cos"]}
+    CREE = {"name": "Cree", "iso639_1": "cr", "iso639_2": ["cre"]}
     CROATIAN = {"name": "Croatian", "iso639_1": "hr", "iso639_2": ["hrv"]}
-    CZECH = {"name": "Czech", "iso639_1": "cs", "iso639_2": ["cze"]}
+    CZECH = {"name": "Czech", "iso639_1": "cs", "iso639_2": ["ces", "cze"]}
     DANISH = {"name": "Danish", "iso639_1": "da", "iso639_2": ["dan"]}
+    DIVEHI = {"name": "Divehi", "iso639_1": "dv", "iso639_2": ["div"]}
     DUTCH = {"name": "Dutch", "iso639_1": "nl", "iso639_2": ["nld", "dut"]}
+    DZONGKHA = {"name": "Dzongkha", "iso639_1": "dz", "iso639_2": ["dzo"]}
     ENGLISH = {"name": "English", "iso639_1": "en", "iso639_2": ["eng"]}
+    ESPERANTO = {"name": "Esperanto", "iso639_1": "eo", "iso639_2": ["epo"]}
     ESTONIAN = {"name": "Estonian", "iso639_1": "et", "iso639_2": ["est"]}
-    FILIPINO = {"name": "Filipino", "iso639_1": "tl", "iso639_2": ["fil"]} # Tagalog
+    EWE = {"name": "Ewe", "iso639_1": "ee", "iso639_2": ["ewe"]}
+    FAROESE = {"name": "Faroese", "iso639_1": "fo", "iso639_2": ["fao"]}
+    FIJIAN = {"name": "Fijian", "iso639_1": "fj", "iso639_2": ["fij"]}
     FINNISH = {"name": "Finnish", "iso639_1": "fi", "iso639_2": ["fin"]}
     FRENCH = {"name": "French", "iso639_1": "fr", "iso639_2": ["fra", "fre"]}
+    FULAH = {"name": "Fulah", "iso639_1": "ff", "iso639_2": ["ful"]}
     GALICIAN = {"name": "Galician", "iso639_1": "gl", "iso639_2": ["glg"]}
-    GEORGIAN = {"name": "Georgian", "iso639_1": "ka", "iso639_2": ["geo"]}
+    GANDA = {"name": "Ganda", "iso639_1": "lg", "iso639_2": ["lug"]}
+    GEORGIAN = {"name": "Georgian", "iso639_1": "ka", "iso639_2": ["kat", "geo"]}
     GERMAN = {"name": "German", "iso639_1": "de", "iso639_2": ["deu", "ger"]}
-    GREEK = {"name": "Greek", "iso639_1": "el", "iso639_2": ["gre"]}
+    GREEK = {"name": "Greek", "iso639_1": "el", "iso639_2": ["ell", "gre"]}
+    GUARANI = {"name": "Guarani", "iso639_1": "gn", "iso639_2": ["grn"]}
+    GUJARATI = {"name": "Gujarati", "iso639_1": "gu", "iso639_2": ["guj"]}
+    HAITIAN = {"name": "Haitian", "iso639_1": "ht", "iso639_2": ["hat"]}
+    HAUSA = {"name": "Hausa", "iso639_1": "ha", "iso639_2": ["hau"]}
     HEBREW = {"name": "Hebrew", "iso639_1": "he", "iso639_2": ["heb"]}
+    HERERO = {"name": "Herero", "iso639_1": "hz", "iso639_2": ["her"]}
     HINDI = {"name": "Hindi", "iso639_1": "hi", "iso639_2": ["hin"]}
+    HIRI_MOTU = {"name": "Hiri Motu", "iso639_1": "ho", "iso639_2": ["hmo"]}
     HUNGARIAN = {"name": "Hungarian", "iso639_1": "hu", "iso639_2": ["hun"]}
-    ICELANDIC = {"name": "Icelandic", "iso639_1": "is", "iso639_2": ["ice"]}
+    ICELANDIC = {"name": "Icelandic", "iso639_1": "is", "iso639_2": ["isl", "ice"]}
+    IDO = {"name": "Ido", "iso639_1": "io", "iso639_2": ["ido"]}
+    IGBO = {"name": "Igbo", "iso639_1": "ig", "iso639_2": ["ibo"]}
     INDONESIAN = {"name": "Indonesian", "iso639_1": "id", "iso639_2": ["ind"]}
+    INTERLINGUA = {"name": "Interlingua", "iso639_1": "ia", "iso639_2": ["ina"]}
+    INTERLINGUE = {"name": "Interlingue", "iso639_1": "ie", "iso639_2": ["ile"]}
+    INUKTITUT = {"name": "Inuktitut", "iso639_1": "iu", "iso639_2": ["iku"]}
+    INUPIAQ = {"name": "Inupiaq", "iso639_1": "ik", "iso639_2": ["ipk"]}
     IRISH = {"name": "Irish", "iso639_1": "ga", "iso639_2": ["gle"]}
     ITALIAN = {"name": "Italian", "iso639_1": "it", "iso639_2": ["ita"]}
     JAPANESE = {"name": "Japanese", "iso639_1": "ja", "iso639_2": ["jpn"]}
+    JAVANESE = {"name": "Javanese", "iso639_1": "jv", "iso639_2": ["jav"]}
+    KALAALLISUT = {"name": "Kalaallisut", "iso639_1": "kl", "iso639_2": ["kal"]}
     KANNADA = {"name": "Kannada", "iso639_1": "kn", "iso639_2": ["kan"]}
+    KANURI = {"name": "Kanuri", "iso639_1": "kr", "iso639_2": ["kau"]}
+    KASHMIRI = {"name": "Kashmiri", "iso639_1": "ks", "iso639_2": ["kas"]}
     KAZAKH = {"name": "Kazakh", "iso639_1": "kk", "iso639_2": ["kaz"]}
+    KHMER = {"name": "Khmer", "iso639_1": "km", "iso639_2": ["khm"]}
+    KIKUYU = {"name": "Kikuyu", "iso639_1": "ki", "iso639_2": ["kik"]}
+    KINYARWANDA = {"name": "Kinyarwanda", "iso639_1": "rw", "iso639_2": ["kin"]}
+    KIRGHIZ = {"name": "Kirghiz", "iso639_1": "ky", "iso639_2": ["kir"]}
+    KOMI = {"name": "Komi", "iso639_1": "kv", "iso639_2": ["kom"]}
+    KONGO = {"name": "Kongo", "iso639_1": "kg", "iso639_2": ["kon"]}
     KOREAN = {"name": "Korean", "iso639_1": "ko", "iso639_2": ["kor"]}
+    KUANYAMA = {"name": "Kuanyama", "iso639_1": "kj", "iso639_2": ["kua"]}
+    KURDISH = {"name": "Kurdish", "iso639_1": "ku", "iso639_2": ["kur"]}
+    LAO = {"name": "Lao", "iso639_1": "lo", "iso639_2": ["lao"]}
     LATIN = {"name": "Latin", "iso639_1": "la", "iso639_2": ["lat"]}
     LATVIAN = {"name": "Latvian", "iso639_1": "lv", "iso639_2": ["lav"]}
+    LIMBURGAN = {"name": "Limburgan", "iso639_1": "li", "iso639_2": ["lim"]}
+    LINGALA = {"name": "Lingala", "iso639_1": "ln", "iso639_2": ["lin"]}
     LITHUANIAN = {"name": "Lithuanian", "iso639_1": "lt", "iso639_2": ["lit"]}
-    MACEDONIAN = {"name": "Macedonian", "iso639_1": "mk", "iso639_2": ["mac"]}
-    MALAY = {"name": "Malay", "iso639_1": "ms", "iso639_2": ["may"]}
+    LUBA_KATANGA = {"name": "Luba-Katanga", "iso639_1": "lu", "iso639_2": ["lub"]}
+    LUXEMBOURGISH = {"name": "Luxembourgish", "iso639_1": "lb", "iso639_2": ["ltz"]}
+    MACEDONIAN = {"name": "Macedonian", "iso639_1": "mk", "iso639_2": ["mkd", "mac"]}
+    MALAGASY = {"name": "Malagasy", "iso639_1": "mg", "iso639_2": ["mlg"]}
+    MALAY = {"name": "Malay", "iso639_1": "ms", "iso639_2": ["msa", "may"]}
     MALAYALAM = {"name": "Malayalam", "iso639_1": "ml", "iso639_2": ["mal"]}
     MALTESE = {"name": "Maltese", "iso639_1": "mt", "iso639_2": ["mlt"]}
+    MANX = {"name": "Manx", "iso639_1": "gv", "iso639_2": ["glv"]}
+    MAORI = {"name": "Maori", "iso639_1": "mi", "iso639_2": ["mri", "mao"]}
+    MARATHI = {"name": "Marathi", "iso639_1": "mr", "iso639_2": ["mar"]}
+    MARSHALLESE = {"name": "Marshallese", "iso639_1": "mh", "iso639_2": ["mah"]}
+    MONGOLIAN = {"name": "Mongolian", "iso639_1": "mn", "iso639_2": ["mon"]}
+    NAURU = {"name": "Nauru", "iso639_1": "na", "iso639_2": ["nau"]}
+    NAVAJO = {"name": "Navajo", "iso639_1": "nv", "iso639_2": ["nav"]}
+    NDONGA = {"name": "Ndonga", "iso639_1": "ng", "iso639_2": ["ndo"]}
+    NEPALI = {"name": "Nepali", "iso639_1": "ne", "iso639_2": ["nep"]}
+    NORTH_NDEBELE = {"name": "North Ndebele", "iso639_1": "nd", "iso639_2": ["nde"]}
+    NORTHERN_SAMI = {"name": "Northern Sami", "iso639_1": "se", "iso639_2": ["sme"]}
     NORWEGIAN = {"name": "Norwegian", "iso639_1": "no", "iso639_2": ["nor"]}
-    PERSIAN = {"name": "Persian", "iso639_1": "fa", "iso639_2": ["per"]}
+    NORWEGIAN_NYNORSK = {"name": "Nynorsk", "iso639_1": "nn", "iso639_2": ["nno"]}
+    OCCITAN = {"name": "Occitan", "iso639_1": "oc", "iso639_2": ["oci"]}
+    OJIBWA = {"name": "Ojibwa", "iso639_1": "oj", "iso639_2": ["oji"]}
+    ORIYA = {"name": "Oriya", "iso639_1": "or", "iso639_2": ["ori"]}
+    OROMO = {"name": "Oromo", "iso639_1": "om", "iso639_2": ["orm"]}
+    OSSETIAN = {"name": "Ossetian", "iso639_1": "os", "iso639_2": ["oss"]}
+    PALI = {"name": "Pali", "iso639_1": "pi", "iso639_2": ["pli"]}
+    PANJABI = {"name": "Panjabi", "iso639_1": "pa", "iso639_2": ["pan"]}
+    PERSIAN = {"name": "Persian", "iso639_1": "fa", "iso639_2": ["fas", "per"]}
     POLISH = {"name": "Polish", "iso639_1": "pl", "iso639_2": ["pol"]}
     PORTUGUESE = {"name": "Portuguese", "iso639_1": "pt", "iso639_2": ["por"]}
-    ROMANIAN = {"name": "Romanian", "iso639_1": "ro", "iso639_2": ["rum"]}
+    PUSHTO = {"name": "Pushto", "iso639_1": "ps", "iso639_2": ["pus"]}
+    QUECHUA = {"name": "Quechua", "iso639_1": "qu", "iso639_2": ["que"]}
+    ROMANIAN = {"name": "Romanian", "iso639_1": "ro", "iso639_2": ["ron", "rum"]}
+    ROMANSH = {"name": "Romansh", "iso639_1": "rm", "iso639_2": ["roh"]}
+    RUNDI = {"name": "Rundi", "iso639_1": "rn", "iso639_2": ["run"]}
     RUSSIAN = {"name": "Russian", "iso639_1": "ru", "iso639_2": ["rus"]}
-    NORTHERN_SAMI = {"name": "Northern Sami", "iso639_1": "se", "iso639_2": ["sme"]}
     SAMOAN = {"name": "Samoan", "iso639_1": "sm", "iso639_2": ["smo"]}
     SANGO = {"name": "Sango", "iso639_1": "sg", "iso639_2": ["sag"]}
     SANSKRIT = {"name": "Sanskrit", "iso639_1": "sa", "iso639_2": ["san"]}
     SARDINIAN = {"name": "Sardinian", "iso639_1": "sc", "iso639_2": ["srd"]}
+    SCOTTISH_GAELIC = {"name": "Scottish Gaelic", "iso639_1": "gd", "iso639_2": ["gla"]}
     SERBIAN = {"name": "Serbian", "iso639_1": "sr", "iso639_2": ["srp"]}
     SHONA = {"name": "Shona", "iso639_1": "sn", "iso639_2": ["sna"]}
+    SICHUAN_YI = {"name": "Sichuan Yi", "iso639_1": "ii", "iso639_2": ["iii"]}
     SINDHI = {"name": "Sindhi", "iso639_1": "sd", "iso639_2": ["snd"]}
     SINHALA = {"name": "Sinhala", "iso639_1": "si", "iso639_2": ["sin"]}
-    SLOVAK = {"name": "Slovak", "iso639_1": "sk", "iso639_2": ["slk"]}
+    SLOVAK = {"name": "Slovak", "iso639_1": "sk", "iso639_2": ["slk", "slo"]}
     SLOVENIAN = {"name": "Slovenian", "iso639_1": "sl", "iso639_2": ["slv"]}
     SOMALI = {"name": "Somali", "iso639_1": "so", "iso639_2": ["som"]}
+    SOUTH_NDEBELE = {"name": "South Ndebele", "iso639_1": "nr", "iso639_2": ["nbl"]}
     SOUTHERN_SOTHO = {"name": "Southern Sotho", "iso639_1": "st", "iso639_2": ["sot"]}
     SPANISH = {"name": "Spanish", "iso639_1": "es", "iso639_2": ["spa"]}
     SUNDANESE = {"name": "Sundanese", "iso639_1": "su", "iso639_2": ["sun"]}
@@ -70,14 +157,38 @@ class IsoLanguage(Enum):
     SWATI = {"name": "Swati", "iso639_1": "ss", "iso639_2": ["ssw"]}
     SWEDISH = {"name": "Swedish", "iso639_1": "sv", "iso639_2": ["swe"]}
     TAGALOG = {"name": "Tagalog", "iso639_1": "tl", "iso639_2": ["tgl"]}
+    TAHITIAN = {"name": "Tahitian", "iso639_1": "ty", "iso639_2": ["tah"]}
+    TAJIK = {"name": "Tajik", "iso639_1": "tg", "iso639_2": ["tgk"]}
     TAMIL = {"name": "Tamil", "iso639_1": "ta", "iso639_2": ["tam"]}
+    TATAR = {"name": "Tatar", "iso639_1": "tt", "iso639_2": ["tat"]}
     TELUGU = {"name": "Telugu", "iso639_1": "te", "iso639_2": ["tel"]}
     THAI = {"name": "Thai", "iso639_1": "th", "iso639_2": ["tha"]}
+    TIBETAN = {"name": "Tibetan", "iso639_1": "bo", "iso639_2": ["bod", "tib"]}
+    TIGRINYA = {"name": "Tigrinya", "iso639_1": "ti", "iso639_2": ["tir"]}
+    TONGA = {"name": "Tonga", "iso639_1": "to", "iso639_2": ["ton"]}
+    TSONGA = {"name": "Tsonga", "iso639_1": "ts", "iso639_2": ["tso"]}
+    TSWANA = {"name": "Tswana", "iso639_1": "tn", "iso639_2": ["tsn"]}
     TURKISH = {"name": "Turkish", "iso639_1": "tr", "iso639_2": ["tur"]}
+    TURKMEN = {"name": "Turkmen", "iso639_1": "tk", "iso639_2": ["tuk"]}
+    TWI = {"name": "Twi", "iso639_1": "tw", "iso639_2": ["twi"]}
+    UIGHUR = {"name": "Uighur", "iso639_1": "ug", "iso639_2": ["uig"]}
     UKRAINIAN = {"name": "Ukrainian", "iso639_1": "uk", "iso639_2": ["ukr"]}
     URDU = {"name": "Urdu", "iso639_1": "ur", "iso639_2": ["urd"]}
-    VIETNAMESE = {"name": "Vietnamese", "iso639_1": "vi", "iso639_2":[ "vie"]}
-    WELSH = {"name": "Welsh", "iso639_1": "cy", "iso639_2": ["wel"]}
+    UZBEK = {"name": "Uzbek", "iso639_1": "uz", "iso639_2": ["uzb"]}
+    VENDA = {"name": "Venda", "iso639_1": "ve", "iso639_2": ["ven"]}
+    VIETNAMESE = {"name": "Vietnamese", "iso639_1": "vi", "iso639_2": ["vie"]}
+    VOLAPUK = {"name": "Volapük", "iso639_1": "vo", "iso639_2": ["vol"]}
+    WALLOON = {"name": "Walloon", "iso639_1": "wa", "iso639_2": ["wln"]}
+    WELSH = {"name": "Welsh", "iso639_1": "cy", "iso639_2": ["cym", "wel"]}
+    WESTERN_FRISIAN = {"name": "Western Frisian", "iso639_1": "fy", "iso639_2": ["fry"]}
+    WOLOF = {"name": "Wolof", "iso639_1": "wo", "iso639_2": ["wol"]}
+    XHOSA = {"name": "Xhosa", "iso639_1": "xh", "iso639_2": ["xho"]}
+    YIDDISH = {"name": "Yiddish", "iso639_1": "yi", "iso639_2": ["yid"]}
+    YORUBA = {"name": "Yoruba", "iso639_1": "yo", "iso639_2": ["yor"]}
+    ZHUANG = {"name": "Zhuang", "iso639_1": "za", "iso639_2": ["zha"]}
+    ZULU = {"name": "Zulu", "iso639_1": "zu", "iso639_2": ["zul"]}

+    FILIPINO = {"name": "Filipino", "iso639_1": "tl", "iso639_2": ["fil"]}

     UNDEFINED = {"name": "undefined", "iso639_1": "xx", "iso639_2": ["und"]}

@@ -88,24 +199,22 @@ class IsoLanguage(Enum):
         closestMatches = difflib.get_close_matches(label, [l.value["name"] for l in IsoLanguage], n=1)

         if closestMatches:
-            foundLangs = [l for l in IsoLanguage if l.value['name'] == closestMatches[0]]
+            foundLangs = [l for l in IsoLanguage if l.value["name"] == closestMatches[0]]
             return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
         else:
             return IsoLanguage.UNDEFINED

     @staticmethod
     def findThreeLetter(theeLetter : str):
-        foundLangs = [l for l in IsoLanguage if str(theeLetter) in l.value['iso639_2']]
+        foundLangs = [l for l in IsoLanguage if str(theeLetter) in l.value["iso639_2"]]
         return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED

     def label(self):
-        return str(self.value['name'])
+        return str(self.value["name"])

     def twoLetter(self):
-        return str(self.value['iso639_1'])
+        return str(self.value["iso639_1"])

     def threeLetter(self):
-        return str(self.value['iso639_2'][0])
+        return str(self.value["iso639_2"][0])
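The lookup helpers touched by this hunk can be sketched with a trimmed-down enum. `findThreeLetter` and the `difflib` fuzzy matching are taken from the diff; the name `findLabel` is an assumption, since the enclosing `def` of the `closestMatches` block falls outside the hunk:

```python
import difflib
from enum import Enum

class IsoLanguage(Enum):
    # Only a few members reproduced; the real enum lists all ISO 639 languages.
    GERMAN = {"name": "German", "iso639_1": "de", "iso639_2": ["deu", "ger"]}
    GREEK = {"name": "Greek", "iso639_1": "el", "iso639_2": ["ell", "gre"]}
    UNDEFINED = {"name": "undefined", "iso639_1": "xx", "iso639_2": ["und"]}

    @staticmethod
    def findLabel(label: str):
        # Fuzzy-match a free-form label against the canonical names.
        closestMatches = difflib.get_close_matches(
            label, [l.value["name"] for l in IsoLanguage], n=1)
        if closestMatches:
            foundLangs = [l for l in IsoLanguage if l.value["name"] == closestMatches[0]]
            return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
        return IsoLanguage.UNDEFINED

    @staticmethod
    def findThreeLetter(threeLetter: str):
        # Legacy bibliographic codes like "ger" match because iso639_2
        # now lists both the terminological and bibliographic variants.
        foundLangs = [l for l in IsoLanguage if str(threeLetter) in l.value["iso639_2"]]
        return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED

print(IsoLanguage.findLabel("germann").name)    # fuzzy match
print(IsoLanguage.findThreeLetter("ger").name)
```

Storing both `iso639_2` variants is the point of most single-line changes in this hunk: ffprobe and mkv tooling emit either code, so membership in the list covers both.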
68  src/ffx/logging_utils.py  Normal file
@@ -0,0 +1,68 @@
+import logging
+import os
+
+
+FFX_LOGGER_NAME = "FFX"
+CONSOLE_HANDLER_NAME = "ffx-console"
+FILE_HANDLER_NAME = "ffx-file"
+
+
+def get_ffx_logger(name: str = FFX_LOGGER_NAME) -> logging.Logger:
+    logger = logging.getLogger(name)
+    logger.setLevel(logging.DEBUG)
+
+    if not logger.handlers:
+        logger.addHandler(logging.NullHandler())
+
+    return logger
+
+
+def configure_ffx_logger(
+    log_file_path: str,
+    file_level: int,
+    console_level: int,
+    name: str = FFX_LOGGER_NAME,
+) -> logging.Logger:
+    logger = get_ffx_logger(name)
+    logger.propagate = False
+
+    for handler in list(logger.handlers):
+        if isinstance(handler, logging.NullHandler):
+            logger.removeHandler(handler)
+
+    console_handler = next(
+        (handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME),
+        None,
+    )
+    if console_handler is None:
+        console_handler = logging.StreamHandler()
+        console_handler.set_name(CONSOLE_HANDLER_NAME)
+        logger.addHandler(console_handler)
+
+    console_handler.setLevel(console_level)
+    console_handler.setFormatter(logging.Formatter("%(message)s"))
+
+    normalized_log_path = os.path.abspath(log_file_path)
+    file_handler = next(
+        (handler for handler in logger.handlers if handler.get_name() == FILE_HANDLER_NAME),
+        None,
+    )
+    if (
+        file_handler is not None
+        and os.path.abspath(file_handler.baseFilename) != normalized_log_path
+    ):
+        logger.removeHandler(file_handler)
+        file_handler.close()
+        file_handler = None
+
+    if file_handler is None:
+        file_handler = logging.FileHandler(normalized_log_path)
+        file_handler.set_name(FILE_HANDLER_NAME)
+        logger.addHandler(file_handler)
+
+    file_handler.setLevel(file_level)
+    file_handler.setFormatter(
+        logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
+    )
+
+    return logger
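The named-handler lookup in the new `configure_ffx_logger` is what makes repeated configuration idempotent: handlers are found by name and reused instead of stacked. A standalone sketch of that idea (the demo logger name is arbitrary):

```python
import logging

CONSOLE_HANDLER_NAME = "ffx-console"

def configure_console(logger: logging.Logger, level: int) -> logging.Handler:
    # Reuse an existing handler with our name instead of adding a duplicate.
    handler = next(
        (h for h in logger.handlers if h.get_name() == CONSOLE_HANDLER_NAME),
        None,
    )
    if handler is None:
        handler = logging.StreamHandler()
        handler.set_name(CONSOLE_HANDLER_NAME)
        logger.addHandler(handler)
    handler.setLevel(level)  # reconfiguring only updates the level
    return handler

log = logging.getLogger("ffx-handler-demo")
first = configure_console(log, logging.INFO)
second = configure_console(log, logging.DEBUG)
print(first is second, len(log.handlers))  # True 1
```

Without the by-name lookup, every call to the configuration path would add another `StreamHandler`, and each log record would be printed once per call.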
@@ -25,10 +25,9 @@ class MediaController():
         pid = int(patternId)

         s = self.Session()
-        q = s.query(Pattern).filter(Pattern.id == pid)
+        pattern = s.query(Pattern).filter(Pattern.id == pid).first()

-        if q.count():
-            pattern = q.first
+        if pattern is not None:

             for mediaTagKey, mediaTagValue in mediaDescriptor.getTags():
                 self.__tac.updateMediaTag(pid, mediaTagKey, mediaTagValue)
@@ -1,4 +1,4 @@
-import os, re, click, logging
+import os, re, click

 from typing import List, Self

@@ -9,6 +9,7 @@ from ffx.track_disposition import TrackDisposition
 from ffx.track_codec import TrackCodec

 from ffx.track_descriptor import TrackDescriptor
+from ffx.logging_utils import get_ffx_logger


 class MediaDescriptor:
@@ -20,6 +21,7 @@ class MediaDescriptor:
     TRACKS_KEY = "tracks"

     TRACK_DESCRIPTOR_LIST_KEY = "track_descriptors"
+    ATTACHMENT_DESCRIPTOR_LIST_KEY = "attachment_descriptors"
     CLEAR_TAGS_FLAG_KEY = "clear_tags"

     FFPROBE_DISPOSITION_KEY = "disposition"
@@ -45,8 +47,7 @@ class MediaDescriptor:
             self.__logger = self.__context['logger']
         else:
             self.__context = {}
-            self.__logger = logging.getLogger('FFX')
-            self.__logger.addHandler(logging.NullHandler())
+            self.__logger = get_ffx_logger()

         if MediaDescriptor.TAGS_KEY in kwargs.keys():
             if type(kwargs[MediaDescriptor.TAGS_KEY]) is not dict:
@@ -69,9 +70,9 @@ class MediaDescriptor:
                 raise TypeError(
                     f"TrackDesciptor.__init__(): All elements of argument list {MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY} are required to be of type TrackDescriptor"
                 )
-            self.__trackDescriptors = kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]
+            self.__trackDescriptors: List[TrackDescriptor] = kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]
         else:
-            self.__trackDescriptors = []
+            self.__trackDescriptors: List[TrackDescriptor] = []

     def setTrackLanguage(self, language: str, index: int, trackType: TrackType = None):

@@ -206,7 +207,7 @@ class MediaDescriptor:
     def rearrangeTrackDescriptors(self, newOrder: List[int]):
         if len(newOrder) != len(self.__trackDescriptors):
             raise ValueError('Length of list with reordered indices does not match number of track descriptors')
-        reorderedTrackDescriptors = {}
+        reorderedTrackDescriptors = []
         for oldIndex in newOrder:
             reorderedTrackDescriptors.append(self.__trackDescriptors[oldIndex])
         self.__trackDescriptors = reorderedTrackDescriptors
@@ -320,6 +321,13 @@ class MediaDescriptor:
|
|||||||
if s.getType() == TrackType.SUBTITLE
|
if s.getType() == TrackType.SUBTITLE
|
||||||
]
|
]
|
||||||
|
|
||||||
|
def getAttachmentTracks(self) -> List[TrackDescriptor]:
|
||||||
|
return [
|
||||||
|
s
|
||||||
|
for s in self.__trackDescriptors
|
||||||
|
if s.getType() == TrackType.ATTACHMENT
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
def getImportFileTokens(self, use_sub_index: bool = True):
|
def getImportFileTokens(self, use_sub_index: bool = True):
|
||||||
"""Generate ffmpeg import options for external stream files"""
|
"""Generate ffmpeg import options for external stream files"""
|
||||||
@@ -345,12 +353,23 @@ class MediaDescriptor:
|
|||||||
return importFileTokens
|
return importFileTokens
|
||||||
|
|
||||||
|
|
||||||
def getInputMappingTokens(self, use_sub_index: bool = True, only_video: bool = False):
|
def getInputMappingTokens(self,
|
||||||
|
use_sub_index: bool = True,
|
||||||
|
only_video: bool = False,
|
||||||
|
sourceMediaDescriptor: Self = None):
|
||||||
"""Tracks must be reordered for source index order"""
|
"""Tracks must be reordered for source index order"""
|
||||||
|
|
||||||
inputMappingTokens = []
|
inputMappingTokens = []
|
||||||
|
|
||||||
sortedTrackDescriptors = sorted(self.__trackDescriptors, key=lambda d: d.getIndex())
|
sortedTrackDescriptors = sorted(self.__trackDescriptors, key=lambda d: d.getIndex())
|
||||||
|
sourceTrackDescriptorsByIndex = {
|
||||||
|
td.getIndex(): td
|
||||||
|
for td in (
|
||||||
|
sourceMediaDescriptor.getTrackDescriptors()
|
||||||
|
if sourceMediaDescriptor is not None
|
||||||
|
else sortedTrackDescriptors
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
# raise click.ClickException(' '.join([f"\nindex={td.getIndex()} subIndex={td.getSubIndex()} srcIndex={td.getSourceIndex()} type={td.getType().label()}" for td in self.__trackDescriptors]))
|
# raise click.ClickException(' '.join([f"\nindex={td.getIndex()} subIndex={td.getSubIndex()} srcIndex={td.getSourceIndex()} type={td.getType().label()}" for td in self.__trackDescriptors]))
|
||||||
|
|
||||||
@@ -362,12 +381,19 @@ class MediaDescriptor:
|
|||||||
#HINT: Attached thumbnails are not supported by .webm container format
|
#HINT: Attached thumbnails are not supported by .webm container format
|
||||||
if td.getCodec() != TrackCodec.PNG:
|
if td.getCodec() != TrackCodec.PNG:
|
||||||
|
|
||||||
stdi = sortedTrackDescriptors[td.getSourceIndex()].getIndex()
|
sourceTrackDescriptor = sourceTrackDescriptorsByIndex.get(td.getSourceIndex())
|
||||||
stdsi = sortedTrackDescriptors[td.getSourceIndex()].getSubIndex()
|
if sourceTrackDescriptor is None:
|
||||||
|
raise ValueError(f"No source track descriptor found for source index {td.getSourceIndex()}")
|
||||||
|
|
||||||
|
stdi = sourceTrackDescriptor.getIndex()
|
||||||
|
stdsi = sourceTrackDescriptor.getSubIndex()
|
||||||
|
|
||||||
trackType = td.getType()
|
trackType = td.getType()
|
||||||
|
trackCodec = td.getCodec()
|
||||||
|
|
||||||
|
if (trackType != TrackType.ATTACHMENT
|
||||||
|
and (trackType == TrackType.VIDEO or not only_video)):
|
||||||
|
|
||||||
if (trackType == TrackType.VIDEO or not only_video):
|
|
||||||
|
|
||||||
importedFilePath = td.getExternalSourceFilePath()
|
importedFilePath = td.getExternalSourceFilePath()
|
||||||
|
|
||||||
@@ -383,16 +409,27 @@ class MediaDescriptor:
|
|||||||
|
|
||||||
else:
|
else:
|
||||||
|
|
||||||
if not td.getCodec() in [TrackCodec.PGS, TrackCodec.VOBSUB]:
|
if not trackCodec in [TrackCodec.PGS, TrackCodec.VOBSUB]:
|
||||||
inputMappingTokens += [
|
inputMappingTokens += [
|
||||||
"-map",
|
"-map",
|
||||||
f"0:{trackType.indicator()}:{stdsi}",
|
f"0:{trackType.indicator()}:{stdsi}",
|
||||||
]
|
]
|
||||||
|
|
||||||
else:
|
else:
|
||||||
if not td.getCodec() in [TrackCodec.PGS, TrackCodec.VOBSUB]:
|
if not trackCodec in [TrackCodec.PGS, TrackCodec.VOBSUB]:
|
||||||
inputMappingTokens += ["-map", f"0:{stdi}"]
|
inputMappingTokens += ["-map", f"0:{stdi}"]
|
||||||
|
|
||||||
|
if sourceMediaDescriptor:
|
||||||
|
fontDescriptors = [ftd for ftd in sourceMediaDescriptor.getAttachmentTracks()
|
||||||
|
if ftd.getCodec() == TrackCodec.TTF]
|
||||||
|
else:
|
||||||
|
fontDescriptors = [ftd for ftd in self.__trackDescriptors
|
||||||
|
if ftd.getType() == TrackType.ATTACHMENT
|
||||||
|
and ftd.getCodec() == TrackCodec.TTF]
|
||||||
|
|
||||||
|
for ad in sorted(fontDescriptors, key=lambda d: d.getIndex()):
|
||||||
|
inputMappingTokens += ["-map", f"0:{ad.getIndex()}"]
|
||||||
|
|
||||||
return inputMappingTokens
|
return inputMappingTokens
|
||||||
|
|
||||||
|
|
||||||
@@ -463,7 +500,14 @@ class MediaDescriptor:
|
|||||||
return subtitleFileDescriptors
|
return subtitleFileDescriptors
|
||||||
|
|
||||||
|
|
||||||
def importSubtitles(self, searchDirectory, prefix, season: int = -1, episode: int = -1):
|
def importSubtitles(
|
||||||
|
self,
|
||||||
|
searchDirectory,
|
||||||
|
prefix,
|
||||||
|
season: int = -1,
|
||||||
|
episode: int = -1,
|
||||||
|
preserve_dispositions: bool = False,
|
||||||
|
):
|
||||||
|
|
||||||
# click.echo(f"Season: {season} Episode: {episode}")
|
# click.echo(f"Season: {season} Episode: {episode}")
|
||||||
self.__logger.debug(f"importSubtitles(): Season: {season} Episode: {episode}")
|
self.__logger.debug(f"importSubtitles(): Season: {season} Episode: {episode}")
|
||||||
@@ -482,7 +526,10 @@ class MediaDescriptor:
|
|||||||
d
|
d
|
||||||
for d in availableFileSubtitleDescriptors
|
for d in availableFileSubtitleDescriptors
|
||||||
if ((season == -1 and episode == -1)
|
if ((season == -1 and episode == -1)
|
||||||
or (d["season"] == int(season) and d["episode"] == int(episode)))
|
or (
|
||||||
|
d.get("season") == int(season)
|
||||||
|
and d.get("episode") == int(episode)
|
||||||
|
))
|
||||||
],
|
],
|
||||||
key=lambda d: d["index"],
|
key=lambda d: d["index"],
|
||||||
)
|
)
|
||||||
@@ -497,10 +544,14 @@ class MediaDescriptor:
|
|||||||
if matchingSubtitleTrackDescriptor:
|
if matchingSubtitleTrackDescriptor:
|
||||||
# click.echo(f"Found matching subtitle file {msfd["path"]}\n")
|
# click.echo(f"Found matching subtitle file {msfd["path"]}\n")
|
||||||
self.__logger.debug(f"importSubtitles(): Found matching subtitle file {msfd['path']}")
|
self.__logger.debug(f"importSubtitles(): Found matching subtitle file {msfd['path']}")
|
||||||
matchingSubtitleTrackDescriptor[0].setExternalSourceFilePath(msfd["path"])
|
matchingTrack = matchingSubtitleTrackDescriptor[0]
|
||||||
|
matchingTrack.setExternalSourceFilePath(msfd["path"])
|
||||||
|
|
||||||
# TODO: Check if useful
|
# Prefer metadata coming from the external single-track source when
|
||||||
# matchingSubtitleTrackDescriptor[0].setDispositionSet(msfd["disposition_set"])
|
# it is provided explicitly by the filename contract.
|
||||||
|
matchingTrack.getTags()["language"] = msfd["language"]
|
||||||
|
if msfd["disposition_set"] and not preserve_dispositions:
|
||||||
|
matchingTrack.setDispositionSet(msfd["disposition_set"])
|
||||||
|
|
||||||
|
|
||||||
def getConfiguration(self, label: str = ''):
|
def getConfiguration(self, label: str = ''):
|
||||||
|
|||||||
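A recurring change in this file swaps positional list indexing (`sortedTrackDescriptors[td.getSourceIndex()]`) for a dict keyed by each track's own index (`sourceTrackDescriptorsByIndex`), with an explicit error when the index is unknown. The sketch below illustrates why, using a hypothetical minimal `Track` class (the names `Track`, `resolve`, and `tracks_by_index` are illustrative, not from the project):

```python
from dataclasses import dataclass

@dataclass
class Track:
    index: int   # stream index as reported by the container; may have gaps
    label: str

# Positional indexing (tracks[source_index]) silently returns the wrong
# element, or raises IndexError, as soon as stream indices are non-contiguous,
# e.g. after a track was dropped upstream.
tracks = [Track(0, "video"), Track(2, "audio"), Track(5, "subs")]

# A lookup keyed by the track's declared index stays correct regardless of
# gaps or list order, and lets us fail loudly on a missing index.
tracks_by_index = {t.index: t for t in tracks}

def resolve(source_index: int) -> Track:
    track = tracks_by_index.get(source_index)
    if track is None:
        raise ValueError(f"No track for source index {source_index}")
    return track
```

With the list above, `resolve(5)` returns the subtitle track, while `tracks[5]` would raise `IndexError` — the same failure mode the diff guards against with its `ValueError` on a missing source index.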
@@ -1,5 +1,6 @@
 import click
 
+from ffx.iso_language import IsoLanguage
 from ffx.media_descriptor import MediaDescriptor
 from ffx.track_descriptor import TrackDescriptor
 
@@ -42,6 +43,14 @@ class MediaDescriptorChangeSet():
 
         self.__targetTrackDescriptors = targetMediaDescriptor.getTrackDescriptors() if targetMediaDescriptor is not None else []
         self.__sourceTrackDescriptors = sourceMediaDescriptor.getTrackDescriptors() if sourceMediaDescriptor is not None else []
+        self.__targetTrackDescriptorsByIndex = {
+            trackDescriptor.getIndex(): trackDescriptor
+            for trackDescriptor in self.__targetTrackDescriptors
+        }
+        self.__sourceTrackDescriptorsByIndex = {
+            trackDescriptor.getIndex(): trackDescriptor
+            for trackDescriptor in self.__sourceTrackDescriptors
+        }
 
         targetMediaTags = targetMediaDescriptor.getTags() if targetMediaDescriptor is not None else {}
         sourceMediaTags = sourceMediaDescriptor.getTags() if sourceMediaDescriptor is not None else {}
@@ -70,51 +79,34 @@ class MediaDescriptorChangeSet():
 
         self.__numSourceTracks = len(self.__sourceTrackDescriptors)
 
-        maxNumOfTracks = max(self.__numSourceTracks, self.__numTargetTracks)
-
         trackCompareResult = {}
 
-        for trackIndex in range(maxNumOfTracks):
-
-            correspondingSourceTrackDescriptors = [st for st in self.__sourceTrackDescriptors if st.getIndex() == trackIndex]
-            correspondingTargetTrackDescriptors = [tt for tt in self.__targetTrackDescriptors if tt.getIndex() == trackIndex]
-
-            # Track present in target but not in source
-            if (not correspondingSourceTrackDescriptors
-                and correspondingTargetTrackDescriptors):
-
+        for targetTrackDescriptor in self.__targetTrackDescriptors:
+            sourceTrackDescriptor = self.__sourceTrackDescriptorsByIndex.get(
+                targetTrackDescriptor.getSourceIndex()
+            )
+
+            if sourceTrackDescriptor is None:
                 if DIFF_ADDED_KEY not in trackCompareResult.keys():
                     trackCompareResult[DIFF_ADDED_KEY] = {}
-                trackCompareResult[DIFF_ADDED_KEY][trackIndex] = correspondingTargetTrackDescriptors[0]
+                trackCompareResult[DIFF_ADDED_KEY][targetTrackDescriptor.getIndex()] = targetTrackDescriptor
                 continue
 
-            # Track present in target but not in source
-            if (correspondingSourceTrackDescriptors
-                and not correspondingTargetTrackDescriptors):
-
-                if DIFF_REMOVED_KEY not in trackCompareResult.keys():
-                    trackCompareResult[DIFF_REMOVED_KEY] = {}
-
-                trackCompareResult[DIFF_REMOVED_KEY][trackIndex] = correspondingSourceTrackDescriptors[0]
-                continue
-
-            if (correspondingSourceTrackDescriptors
-                and correspondingTargetTrackDescriptors):
-
-                # if correspondingTargetTrackDescriptors[0].getIndex() == 3:
-                #     raise click.ClickException(f"{correspondingSourceTrackDescriptors[0].getDispositionSet()} {correspondingTargetTrackDescriptors[0].getDispositionSet()}")
-
-                trackDiff = self.compareTracks(correspondingTargetTrackDescriptors[0],
-                                               correspondingSourceTrackDescriptors[0])
-
+            trackDiff = self.compareTracks(targetTrackDescriptor, sourceTrackDescriptor)
+
             if trackDiff:
                 if DIFF_CHANGED_KEY not in trackCompareResult.keys():
                     trackCompareResult[DIFF_CHANGED_KEY] = {}
-                trackCompareResult[DIFF_CHANGED_KEY][trackIndex] = trackDiff
+                trackCompareResult[DIFF_CHANGED_KEY][targetTrackDescriptor.getIndex()] = trackDiff
+
+        targetSourceIndices = {
+            targetTrackDescriptor.getSourceIndex()
+            for targetTrackDescriptor in self.__targetTrackDescriptors
+        }
+        for sourceTrackDescriptor in self.__sourceTrackDescriptors:
+            if sourceTrackDescriptor.getIndex() not in targetSourceIndices:
+                if DIFF_REMOVED_KEY not in trackCompareResult.keys():
+                    trackCompareResult[DIFF_REMOVED_KEY] = {}
+                trackCompareResult[DIFF_REMOVED_KEY][sourceTrackDescriptor.getIndex()] = sourceTrackDescriptor
 
 
         if trackCompareResult:
@@ -126,7 +118,11 @@ class MediaDescriptorChangeSet():
                       sourceTrackDescriptor: TrackDescriptor = None):
 
         sourceTrackTags = sourceTrackDescriptor.getTags() if sourceTrackDescriptor is not None else {}
-        targetTrackTags = targetTrackDescriptor.getTags() if targetTrackDescriptor is not None else {}
+        targetTrackTags = (
+            self.normalizeTrackTags(targetTrackDescriptor.getTags())
+            if targetTrackDescriptor is not None
+            else {}
+        )
 
         trackCompareResult = {}
 
@@ -151,6 +147,25 @@ class MediaDescriptorChangeSet():
 
         return trackCompareResult
 
+    def normalizeTrackTagValue(self, tagKey, tagValue):
+        if tagKey != "language":
+            return tagValue
+
+        if isinstance(tagValue, IsoLanguage):
+            return tagValue.threeLetter()
+
+        trackLanguage = IsoLanguage.findThreeLetter(str(tagValue))
+        if trackLanguage != IsoLanguage.UNDEFINED:
+            return trackLanguage.threeLetter()
+
+        return tagValue
+
+    def normalizeTrackTags(self, trackTags: dict):
+        return {
+            tagKey: self.normalizeTrackTagValue(tagKey, tagValue)
+            for tagKey, tagValue in trackTags.items()
+        }
+
     def generateDispositionTokens(self):
         """
@@ -252,7 +267,7 @@ class MediaDescriptorChangeSet():
         addedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY]
         trackDescriptor: TrackDescriptor
         for trackDescriptor in addedTracks.values():
-            for tagKey, tagValue in trackDescriptor.getTags().items():
+            for tagKey, tagValue in self.normalizeTrackTags(trackDescriptor.getTags()).items():
                 if not tagKey in self.__removeTrackKeys:
                     metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                        + f":{trackDescriptor.getSubIndex()}",
@@ -274,29 +289,58 @@ class MediaDescriptorChangeSet():
 
             outputTrackTags = addedTrackTags | changedTrackTags
 
-            trackDescriptor = self.__targetTrackDescriptors[trackIndex]
+            trackDescriptor = self.__targetTrackDescriptorsByIndex[trackIndex]
 
-            for tagKey, tagValue in outputTrackTags.items():
+            for tagKey, tagValue in self.normalizeTrackTags(outputTrackTags).items():
                 metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                    + f":{trackDescriptor.getSubIndex()}",
                                    f"{tagKey}={tagValue}"]
 
+            if trackDescriptor.getExternalSourceFilePath():
+                # When a single-track external file substitutes the
+                # media payload, keep metadata from the regular
+                # source track unless the external/target side
+                # overrides it explicitly.
+                preservedTrackTags = (
+                    {
+                        tagKey: tagValue
+                        for tagKey, tagValue in removedTrackTags.items()
+                        if tagKey not in self.__removeTrackKeys
+                    }
+                    | unchangedTrackTags
+                )
+                for tagKey, tagValue in self.normalizeTrackTags(preservedTrackTags).items():
+                    metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
+                                       + f":{trackDescriptor.getSubIndex()}",
+                                       f"{tagKey}={tagValue}"]
+            else:
                 for removeKey in removedTrackTags.keys():
                     metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                        + f":{trackDescriptor.getSubIndex()}",
                                        f"{removeKey}="]
 
-        #HINT: In case of loading a track from an external file
-        # no tags from source are present for the track so
-        # the unchanged tracks are passed to the output file as well
-        if trackDescriptor.getExternalSourceFilePath():
-            for tagKey, tagValue in unchangedTrackTags.items():
-                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
-                                   + f":{trackDescriptor.getSubIndex()}",
-                                   f"{tagKey}={tagValue}"]
+        for tagKey, tagValue in self.__context.get('encoding_metadata_tags', {}).items():
+            metadataTokens += [f"-metadata:g", f"{tagKey}={tagValue}"]
+
+        metadataTokens += self.generateConfiguredRemovalMetadataTokens()
 
         return metadataTokens
 
 
     def getChangeSetObj(self):
         return self.__changeSetObj
 
+    def generateConfiguredRemovalMetadataTokens(self):
+        metadataTokens = []
+
+        for removeKey in self.__removeGlobalKeys:
+            metadataTokens += ["-metadata:g", f"{removeKey}="]
+
+        for trackDescriptor in self.__targetTrackDescriptors:
+            for removeKey in self.__removeTrackKeys:
+                metadataTokens += [
+                    f"-metadata:s:{trackDescriptor.getType().indicator()}:{trackDescriptor.getSubIndex()}",
+                    f"{removeKey}=",
+                ]
+
+        return metadataTokens
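The new `normalizeTrackTagValue`/`normalizeTrackTags` methods canonicalize `language` tags through the project's `IsoLanguage` lookup before tags are compared or emitted, so spelling variants do not register as spurious diffs. A rough stand-in for the same idea, with a tiny hard-coded mapping in place of `IsoLanguage` (the `_THREE_LETTER` table and both function names here are illustrative assumptions):

```python
# Map a few common two- and three-letter spellings onto a canonical
# ISO 639-2 code; the real project delegates this to IsoLanguage.
_THREE_LETTER = {"en": "eng", "eng": "eng", "de": "ger", "deu": "ger", "ger": "ger"}

def normalize_track_tag(key: str, value: str) -> str:
    # Only the language tag is normalized; everything else passes through.
    if key != "language":
        return value
    return _THREE_LETTER.get(value.lower(), value)

def normalize_track_tags(tags: dict) -> dict:
    return {k: normalize_track_tag(k, v) for k, v in tags.items()}
```

With this, `{"language": "de"}` and `{"language": "ger"}` normalize to the same value, so comparing normalized tag dicts no longer reports a change for equivalent language spellings.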
@@ -6,13 +6,9 @@ from textual.containers import Grid
 
 from ffx.audio_layout import AudioLayout
 
-from .pattern_controller import PatternController
-from .show_controller import ShowController
-from .track_controller import TrackController
-from .tag_controller import TagController
-
 from .show_details_screen import ShowDetailsScreen
 from .pattern_details_screen import PatternDetailsScreen
+from .screen_support import build_screen_bootstrap, build_screen_controllers
 
 from ffx.track_type import TrackType
 from ffx.track_codec import TrackCodec
@@ -135,29 +131,23 @@ class MediaDetailsScreen(Screen):
     def __init__(self):
         super().__init__()
 
-        self.context = self.app.getContext()
-        self.Session = self.context['database']['session'] # convenience
+        bootstrap = build_screen_bootstrap(self.app.getContext())
+        self.context = bootstrap.context
 
+        self.__removeGlobalKeys = bootstrap.remove_global_keys
+        self.__ignoreGlobalKeys = bootstrap.ignore_global_keys
 
-        self.__configurationData = self.context['config'].getData()
-        metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}
-
-        self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
-        self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
-        self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
-        self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
-                                  if 'streams' in metadataConfiguration.keys()
-                                  and 'remove' in metadataConfiguration['streams'].keys() else [])
-        self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
-                                  if 'streams' in metadataConfiguration.keys()
-                                  and 'ignore' in metadataConfiguration['streams'].keys() else [])
-
-        self.__pc = PatternController(context = self.context)
-        self.__sc = ShowController(context = self.context)
-        self.__tc = TrackController(context = self.context)
-        self.__tac = TagController(context = self.context)
+        controllers = build_screen_controllers(
+            self.context,
+            pattern=True,
+            show=True,
+            track=True,
+            tag=True,
+        )
+        self.__pc = controllers['pattern']
+        self.__sc = controllers['show']
+        self.__tc = controllers['track']
+        self.__tac = controllers['tag']
 
         if not 'command' in self.context.keys() or self.context['command'] != 'inspect':
             raise click.ClickException(f"MediaDetailsScreen.__init__(): Can only perform command 'inspect'")
@@ -569,6 +559,7 @@ class MediaDetailsScreen(Screen):
         try:
             kwargs = {}
 
+            kwargs[ShowDescriptor.CONTEXT_KEY] = self.context
            kwargs[ShowDescriptor.ID_KEY] = int(selected_row_data[0])
             kwargs[ShowDescriptor.NAME_KEY] = str(selected_row_data[1])
             kwargs[ShowDescriptor.YEAR_KEY] = int(selected_row_data[2])
@@ -602,19 +593,20 @@ class MediaDetailsScreen(Screen):
         patternObj = self.getPatternObjFromInput()
 
         if patternObj:
-            patternId = self.__pc.addPattern(patternObj)
-            if patternId:
-                self.highlightPattern(False)
+            mediaTags = {}
 
             for tagKey, tagValue in self.__sourceMediaDescriptor.getTags().items():
 
                 # Filter tags that make no sense to preserve
                 if tagKey not in self.__ignoreGlobalKeys and not tagKey in self.__removeGlobalKeys:
-                    self.__tac.updateMediaTag(patternId, tagKey, tagValue)
+                    mediaTags[tagKey] = tagValue
 
-            # for trackDescriptor in self.__sourceMediaDescriptor.getAllTrackDescriptors():
-            for trackDescriptor in self.__sourceMediaDescriptor.getTrackDescriptors():
-                self.__tc.addTrack(trackDescriptor, patternId = patternId)
+            patternId = self.__pc.savePatternSchema(
+                patternObj,
+                trackDescriptors=self.__sourceMediaDescriptor.getTrackDescriptors(),
+                mediaTags=mediaTags,
+            )
+            if patternId:
+                self.highlightPattern(False)
 
 
     def action_new_pattern(self):
@@ -754,4 +746,3 @@ class MediaDetailsScreen(Screen):
     def handle_edit_pattern(self, screenResult):
         self.query_one("#pattern_input", Input).value = screenResult['pattern']
         self.updateDifferences()
-
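The `__init__` refactor above moves the repeated config parsing and controller construction out of the screen class and into `build_screen_bootstrap` / `build_screen_controllers` helpers. A minimal sketch of what such a bootstrap helper might look like, assuming the context carries plain nested dicts (the real helper lives in `.screen_support` and its actual signature and the shape of `context['config']` are not shown in this diff):

```python
from dataclasses import dataclass, field

@dataclass
class ScreenBootstrap:
    # Bundles the derived per-screen settings so each Screen subclass
    # does not re-implement the 'metadata' config parsing inline.
    context: dict
    remove_global_keys: list = field(default_factory=list)
    ignore_global_keys: list = field(default_factory=list)

def build_screen_bootstrap(context: dict) -> ScreenBootstrap:
    metadata = context.get('config', {}).get('metadata', {})
    return ScreenBootstrap(
        context=context,
        remove_global_keys=metadata.get('remove', []),
        ignore_global_keys=metadata.get('ignore', []),
    )
```

The payoff is the one visible in the diff: four chained `if 'key' in dict.keys()` lookups collapse into `dict.get()` calls in one place, and every screen gets the same defaults.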
@@ -0,0 +1,20 @@
+"""Load ORM model modules so SQLAlchemy relationship strings can resolve."""
+
+from .show import Base, Show
+from .pattern import Pattern
+from .track import Track
+from .track_tag import TrackTag
+from .media_tag import MediaTag
+from .shifted_season import ShiftedSeason
+from .property import Property
+
+__all__ = [
+    'Base',
+    'Show',
+    'Pattern',
+    'Track',
+    'TrackTag',
+    'MediaTag',
+    'ShiftedSeason',
+    'Property',
+]
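The new package `__init__` exists because SQLAlchemy resolves string relationship targets like `relationship("Pattern")` by class name against a shared registry, and a class only enters that registry once its defining module has been imported. Importing every model module in one place guarantees resolution works no matter which model is imported first. A toy registry in plain Python shows the mechanism (the `ModelMeta` registry here is an illustrative stand-in, not SQLAlchemy's implementation):

```python
# Classes register themselves by name at definition time; string
# references are resolved later, so the defining module must already
# have been imported when resolution happens.
registry: dict[str, type] = {}

class ModelMeta(type):
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        registry[name] = cls

class Base(metaclass=ModelMeta):
    pass

class Show(Base):
    pattern_ref = "Pattern"   # string reference, like relationship("Pattern")

class Pattern(Base):
    pass

def resolve(ref: str) -> type:
    try:
        return registry[ref]
    except KeyError:
        raise LookupError(f"class {ref!r} was never imported/registered")
```

If `Pattern`'s module were never imported, `resolve("Pattern")` would fail exactly the way SQLAlchemy's mapper configuration fails on an unresolvable relationship string, which is the failure mode this `__init__.py` is written to prevent.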
Some files were not shown because too many files have changed in this diff.