# Compare commits

**1 commit:** `v0.2.4...dd8f472ac5`
## .gitignore (vendored, 23 lines deleted)

```diff
@@ -1,23 +0,0 @@
-__pycache__/
-*.py[cod]
-junk/
-.vscode
-.ipynb_checkpoints/
-tools/ansible/inventory/hawaii.yml
-tools/ansible/inventory/peppermint.yml
-tools/ansible/inventory/cappuccino.yml
-tools/ansible/inventory/group_vars/all.yml
-ffx_test_report.log
-bin/conversiontest.py
-
-build/
-dist/
-*.egg-info/
-.venv/
-venv/
-.codex
-
-
-*.mkv
-*.webm
-ffmpeg2pass-0.log
```
## LICENSE.md (595 lines deleted, `@@ -1,595 +0,0 @@`)

GNU General Public License
==========================

_Version 3, 29 June 2007_

_Copyright © 2007 Free Software Foundation, Inc. <<http://fsf.org/>>_

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

## Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: **(1)** assert copyright on the software, and **(2)** offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.
## TERMS AND CONDITIONS

### 0. Definitions

“This License” refers to version 3 of the GNU General Public License.

“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.

To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.

A “covered work” means either the unmodified Program or a work based on the Program.

To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that **(1)** displays an appropriate copyright notice, and **(2)** tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

### 1. Source Code

The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.

A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The “System Libraries” of an executable work include anything, other than the work as a whole, that **(a)** is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and **(b)** serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.
### 2. Basic Permissions

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
### 3. Protecting Users' Legal Rights From Anti-Circumvention Law

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
### 4. Conveying Verbatim Copies

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
### 5. Conveying Modified Source Versions

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

* **a)** The work must carry prominent notices stating that you modified it, and giving a relevant date.
* **b)** The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
* **c)** You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
* **d)** If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
### 6. Conveying Non-Source Forms

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

* **a)** Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
* **b)** Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either **(1)** a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or **(2)** access to copy the Corresponding Source from a network server at no charge.
* **c)** Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
* **d)** Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
* **e)** Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A “User Product” is either **(1)** a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or **(2)** anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
### 7. Additional Terms

“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

* **a)** Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
* **b)** Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
* **c)** Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
* **d)** Limiting the use for publicity purposes of names of licensors or authors of the material; or
* **e)** Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
* **f)** Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
### 8. Termination

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated **(a)** provisionally, unless and until the copyright holder explicitly and finally terminates your license, and **(b)** permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
### 9. Acceptance Not Required for Having Copies

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
### 10. Automatic Licensing of Downstream Recipients

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
### 11. Patents

A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.

A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either **(1)** cause the Corresponding Source to be so available, or **(2)** arrange to deprive yourself of the benefit of the patent license for this particular work, or **(3)** arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license **(a)** in connection with copies of the covered work conveyed by you (or copies made from those copies), or **(b)** primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
### 12. No Surrender of Others' Freedom
|
|
||||||
|
|
||||||
If conditions are imposed on you (whether by court order, agreement or otherwise)
|
|
||||||
that contradict the conditions of this License, they do not excuse you from the
|
|
||||||
conditions of this License. If you cannot convey a covered work so as to satisfy
|
|
||||||
simultaneously your obligations under this License and any other pertinent
|
|
||||||
obligations, then as a consequence you may not convey it at all. For example, if you
|
|
||||||
agree to terms that obligate you to collect a royalty for further conveying from
|
|
||||||
those to whom you convey the Program, the only way you could satisfy both those terms
|
|
||||||
and this License would be to refrain entirely from conveying the Program.
|
|
||||||
|
|
||||||
### 13. Use with the GNU Affero General Public License
|
|
||||||
|
|
||||||
Notwithstanding any other provision of this License, you have permission to link or
|
|
||||||
combine any covered work with a work licensed under version 3 of the GNU Affero
|
|
||||||
General Public License into a single combined work, and to convey the resulting work.
|
|
||||||
The terms of this License will continue to apply to the part which is the covered
|
|
||||||
work, but the special requirements of the GNU Affero General Public License, section
|
|
||||||
13, concerning interaction through a network will apply to the combination as such.
|
|
||||||
|
|
||||||
### 14. Revised Versions of this License
|
|
||||||
|
|
||||||
The Free Software Foundation may publish revised and/or new versions of the GNU
|
|
||||||
General Public License from time to time. Such new versions will be similar in spirit
|
|
||||||
to the present version, but may differ in detail to address new problems or concerns.
|
|
||||||
|
|
||||||
Each version is given a distinguishing version number. If the Program specifies that
|
|
||||||
a certain numbered version of the GNU General Public License “or any later
|
|
||||||
version” applies to it, you have the option of following the terms and
|
|
||||||
conditions either of that numbered version or of any later version published by the
|
|
||||||
Free Software Foundation. If the Program does not specify a version number of the GNU
|
|
||||||
General Public License, you may choose any version ever published by the Free
|
|
||||||
Software Foundation.
|
|
||||||
|
|
||||||
If the Program specifies that a proxy can decide which future versions of the GNU
|
|
||||||
General Public License can be used, that proxy's public statement of acceptance of a
|
|
||||||
version permanently authorizes you to choose that version for the Program.
|
|
||||||
|
|
||||||
Later license versions may give you additional or different permissions. However, no
|
|
||||||
additional obligations are imposed on any author or copyright holder as a result of
|
|
||||||
your choosing to follow a later version.
|
|
||||||
|
|
||||||
### 15. Disclaimer of Warranty
|
|
||||||
|
|
||||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
|
|
||||||
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
|
|
||||||
PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
|
|
||||||
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
|
||||||
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
|
|
||||||
QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
|
|
||||||
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
|
||||||
|
|
||||||
### 16. Limitation of Liability
|
|
||||||
|
|
||||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
|
|
||||||
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS
|
|
||||||
PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
|
|
||||||
INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
|
|
||||||
PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE
|
|
||||||
OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE
|
|
||||||
WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
|
|
||||||
POSSIBILITY OF SUCH DAMAGES.
|
|
||||||
|
|
||||||
### 17. Interpretation of Sections 15 and 16
|
|
||||||
|
|
||||||
If the disclaimer of warranty and limitation of liability provided above cannot be
|
|
||||||
given local legal effect according to their terms, reviewing courts shall apply local
|
|
||||||
law that most closely approximates an absolute waiver of all civil liability in
|
|
||||||
connection with the Program, unless a warranty or assumption of liability accompanies
|
|
||||||
a copy of the Program in return for a fee.
|
|
||||||
|
|
||||||
_END OF TERMS AND CONDITIONS_
|
|
||||||
|
|
||||||
## How to Apply These Terms to Your New Programs
|
|
||||||
|
|
||||||
If you develop a new program, and you want it to be of the greatest possible use to
|
|
||||||
the public, the best way to achieve this is to make it free software which everyone
|
|
||||||
can redistribute and change under these terms.
|
|
||||||
|
|
||||||
To do so, attach the following notices to the program. It is safest to attach them
|
|
||||||
to the start of each source file to most effectively state the exclusion of warranty;
|
|
||||||
and each file should have at least the “copyright” line and a pointer to
|
|
||||||
where the full notice is found.
|
|
||||||
|
|
||||||
<one line to give the program's name and a brief idea of what it does.>
|
|
||||||
Copyright (C) <year> <name of author>
|
|
||||||
|
|
||||||
This program is free software: you can redistribute it and/or modify
|
|
||||||
it under the terms of the GNU General Public License as published by
|
|
||||||
the Free Software Foundation, either version 3 of the License, or
|
|
||||||
(at your option) any later version.
|
|
||||||
|
|
||||||
This program is distributed in the hope that it will be useful,
|
|
||||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
GNU General Public License for more details.
|
|
||||||
|
|
||||||
You should have received a copy of the GNU General Public License
|
|
||||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
Also add information on how to contact you by electronic and paper mail.
|
|
||||||
|
|
||||||
If the program does terminal interaction, make it output a short notice like this
|
|
||||||
when it starts in an interactive mode:
|
|
||||||
|
|
||||||
<program> Copyright (C) <year> <name of author>
|
|
||||||
This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'.
|
|
||||||
This is free software, and you are welcome to redistribute it
|
|
||||||
under certain conditions; type 'show c' for details.
|
|
||||||
|
|
||||||
The hypothetical commands `show w` and `show c` should show the appropriate parts of
|
|
||||||
the General Public License. Of course, your program's commands might be different;
|
|
||||||
for a GUI interface, you would use an “about box”.
|
|
||||||
|
|
||||||
You should also get your employer (if you work as a programmer) or school, if any, to
|
|
||||||
sign a “copyright disclaimer” for the program, if necessary. For more
|
|
||||||
information on this, and how to apply and follow the GNU GPL, see
|
|
||||||
<<http://www.gnu.org/licenses/>>.
|
|
||||||
|
|
||||||
The GNU General Public License does not permit incorporating your program into
|
|
||||||
proprietary programs. If your program is a subroutine library, you may consider it
|
|
||||||
more useful to permit linking proprietary applications with the library. If this is
|
|
||||||
what you want to do, use the GNU Lesser General Public License instead of this
|
|
||||||
License. But first, please read
|
|
||||||
<<http://www.gnu.org/philosophy/why-not-lgpl.html>>.
|
|
||||||
147
README.md
@@ -1,147 +0,0 @@
# FFX

FFX is a local CLI and Textual TUI for inspecting TV episode files, storing normalization rules in SQLite, and converting outputs into a predictable stream, metadata, and filename layout.

## Requirements

- Linux-like environment
- `python3`
- `ffmpeg`
- `ffprobe`
- `cpulimit`

## Installation

FFX uses a two-step local setup flow.

### 1. Install The Bundle

This step creates or reuses the persistent bundle virtualenv in `~/.local/share/ffx.venv`, installs FFX into it, and ensures `ffx` is exposed through a shell alias.

```sh
bash tools/setup.sh
```

If you also want the Python packages needed for the modern test suite:

```sh
bash tools/setup.sh --with-tests
```

You can verify the bundle state without changing anything:

```sh
bash tools/setup.sh --check
```

### 2. Prepare System Dependencies And Local User Files

This step installs or verifies workstation dependencies and seeds local config and data directories. It is the step wrapped by the CLI command `ffx configure_workstation`.

Run it directly:

```sh
bash tools/configure_workstation.sh
```

Or through the installed CLI:

```sh
ffx configure_workstation
```

Check-only mode is available in both forms:

```sh
bash tools/configure_workstation.sh --check
ffx configure_workstation --check
```

`tools/configure_workstation.sh` does not manage the bundle virtualenv. Python-side test packages belong to `tools/setup.sh --with-tests`.

## Basic Usage

Examples:

```sh
ffx version
ffx inspect /path/to/episode.mkv
ffx convert /path/to/episode.mkv
ffx shows
```

## Modern Tests

Install the Python test packages first:

```sh
bash tools/setup.sh --with-tests
```

Then run the modern, automatically discovered test suite:

```sh
./tools/test.sh
```

This runner uses `pytest` and intentionally excludes the legacy harness under `tests/legacy/`.

## Default Local Paths

- Config: `~/.local/etc/ffx.json`
- Database: `~/.local/var/ffx/ffx.db`
- Log file: `~/.local/var/log/ffx.log`
- Bundle venv: `~/.local/share/ffx.venv`

## TMDB

TMDB-backed metadata enrichment requires `TMDB_API_KEY` to be set in the environment.
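For example, the key can be exported for the current shell session before running any enrichment command (the key value below is a placeholder, not a real key):

```sh
# Export the TMDB key for this shell session (placeholder value);
# ffx subcommands started from this shell will inherit it.
export TMDB_API_KEY="your-tmdb-api-key"
```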
## Version History

### 0.2.4

- lightweight CLI commands now stay import-light via lazy runtime loading
- setup/config templating moved to `assets/ffx.json.j2`
- aligned two-step local setup wrappers: `ffx setup` and `ffx configure_workstation`
- combined `ffprobe` payload reuse in `FileProperties`
- configurable crop-detect sampling plus per-process crop result caching
- single-query controller accessors and conditional DB schema bootstrap
- shared screen bootstrap/controller wiring for large detail screens
- configurable default season/episode digit lengths
- digit-aware `rename` and padded `unmux` filename markers

### 0.2.3

- PyPI packaging
- output filename templating
- season shifting
- DB versioning

### 0.2.2

- CLI overrides

### 0.2.1

- signature handling
- tag cleanup
- bugfixes and refactoring

### 0.2.0

- tests
- config file

### 0.1.3

- subtitle file imports

### 0.1.2

- bugfixes

### 0.1.1

- bugfixes
- TMDB show identification
@@ -1,36 +0,0 @@
{
    "databasePath": {{ database_path_json }},
    "logDirectory": {{ log_directory_json }},
    "subtitlesDirectory": {{ subtitles_directory_json }},
    "defaultIndexSeasonDigits": {{ default_index_season_digits }},
    "defaultIndexEpisodeDigits": {{ default_index_episode_digits }},
    "defaultIndicatorSeasonDigits": {{ default_indicator_season_digits }},
    "defaultIndicatorEpisodeDigits": {{ default_indicator_episode_digits }},
    "metadata": {
        "signature": {
            "RECODED_WITH": "FFX"
        },
        "remove": [
            "VERSION-eng",
            "creation_time",
            "NAME"
        ],
        "streams": {
            "remove": [
                "BPS",
                "NUMBER_OF_FRAMES",
                "NUMBER_OF_BYTES",
                "_STATISTICS_WRITING_APP",
                "_STATISTICS_WRITING_DATE_UTC",
                "_STATISTICS_TAGS",
                "BPS-eng",
                "DURATION-eng",
                "NUMBER_OF_FRAMES-eng",
                "NUMBER_OF_BYTES-eng",
                "_STATISTICS_WRITING_APP-eng",
                "_STATISTICS_WRITING_DATE_UTC-eng",
                "_STATISTICS_TAGS-eng"
            ]
        }
    }
}
513
bin/.ipynb_checkpoints/ffx-checkpoint.py
Executable file
@@ -0,0 +1,513 @@
#! /usr/bin/python3

import os, sys, subprocess, json, click, time

from textual.app import App, ComposeResult
from textual.screen import Screen
from textual.widgets import Header, Footer, Placeholder


VERSION = '0.1.0'

DEFAULT_VIDEO_ENCODER = 'vp9'
DEFAULT_QUALITY = 23
DEFAULT_AV1_PRESET = 5

DEFAULT_LABEL = 'output'
DEFAULT_FILE_SUFFIX = 'webm'

DEFAULT_STEREO_BANDWIDTH = "128"
DEFAULT_AC3_BANDWIDTH = "256"
DEFAULT_DTS_BANDWIDTH = "320"

DEFAULT_CROP_START = 60
DEFAULT_CROP_LENGTH = 180

TEMP_FILE_NAME = "ffmpeg2pass-0.log"


MKVMERGE_METADATA_KEYS = ['BPS',
                          'NUMBER_OF_FRAMES',
                          'NUMBER_OF_BYTES',
                          '_STATISTICS_WRITING_APP',
                          '_STATISTICS_WRITING_DATE_UTC',
                          '_STATISTICS_TAGS']

FILE_EXTENSION = ['mkv', 'mp4', 'avi', 'flv', 'webm']


COMMAND_TOKENS = ['ffmpeg', '-y', '-i']
NULL_TOKENS = ['-f', 'null', '/dev/null']

STREAM_TYPE_VIDEO = 'video'
STREAM_TYPE_AUDIO = 'audio'
STREAM_TYPE_SUBTITLE = 'subtitle'

STREAM_LAYOUT_6_1 = '6.1'
STREAM_LAYOUT_5_1 = '5.1(side)'
STREAM_LAYOUT_STEREO = 'stereo'
STREAM_LAYOUT_6CH = '6ch'
class DashboardScreen(Screen):

    def __init__(self):
        super().__init__()

        context = self.app.getContext()
        context['dashboard'] = 'dashboard'

    def compose(self) -> ComposeResult:
        yield Header(show_clock=True)
        yield Placeholder("Dashboard Screen")
        yield Footer()


class SettingsScreen(Screen):
    def __init__(self):
        super().__init__()
        context = self.app.getContext()

    def compose(self) -> ComposeResult:
        yield Placeholder("Settings Screen")
        yield Footer()


class HelpScreen(Screen):
    def __init__(self):
        super().__init__()
        context = self.app.getContext()

    def compose(self) -> ComposeResult:
        yield Placeholder("Help Screen")
        yield Footer()


class ModesApp(App):

    BINDINGS = [
        ("d", "switch_mode('dashboard')", "Dashboard"),
        ("s", "switch_mode('settings')", "Settings"),
        ("h", "switch_mode('help')", "Help"),
    ]

    MODES = {
        "dashboard": DashboardScreen,
        "settings": SettingsScreen,
        "help": HelpScreen,
    }

    def __init__(self, context=None):
        super().__init__()
        # Avoid a mutable default argument shared between instances.
        self.context = context if context is not None else {}

    def on_mount(self) -> None:
        self.switch_mode("dashboard")

    def getContext(self):
        return self.context
def executeProcess(commandSequence):

    # Note: stderr and the exit code are ignored; only stdout is returned.
    process = subprocess.Popen(commandSequence, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    output, error = process.communicate()

    return output


#[{'index': 0, 'codec_name': 'vp9', 'codec_long_name': 'Google VP9', 'profile': 'Profile 0', 'codec_type': 'video', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'width': 1920, 'height': 1080, 'coded_width': 1920, 'coded_height': 1080, 'closed_captions': 0, 'film_grain': 0, 'has_b_frames': 0, 'sample_aspect_ratio': '1:1', 'display_aspect_ratio': '16:9', 'pix_fmt': 'yuv420p', 'level': -99, 'color_range': 'tv', 'chroma_location': 'left', 'field_order': 'progressive', 'refs': 1, 'r_frame_rate': '24000/1001', 'avg_frame_rate': '24000/1001', 'time_base': '1/1000', 'start_pts': 0, 'start_time': '0.000000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'BPS': '7974017', 'NUMBER_OF_FRAMES': '34382', 'NUMBER_OF_BYTES': '1429358655', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libvpx-vp9', 'DURATION': '00:23:54.016000000'}}]
#[{'index': 1, 'codec_name': 'opus', 'codec_long_name': 'Opus (Opus Interactive Audio Codec)', 'codec_type': 'audio', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'sample_fmt': 'fltp', 'sample_rate': '48000', 'channels': 2, 'channel_layout': 'stereo', 'bits_per_sample': 0, 'initial_padding': 312, 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'extradata_size': 19, 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'jpn', 'title': 'Japanisch', 'BPS': '128000', 'NUMBER_OF_FRAMES': '61763', 'NUMBER_OF_BYTES': '22946145', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libopus', 'DURATION': '00:23:54.141000000'}}]
#[{'index': 2, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'ger', 'title': 'Deutsch [Full]', 'BPS': '118', 'NUMBER_OF_FRAMES': '300', 'NUMBER_OF_BYTES': '21128', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:54.010000000'}}, {'index': 3, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'eng', 'title': 'Englisch [Full]', 'BPS': '101', 'NUMBER_OF_FRAMES': '276', 'NUMBER_OF_BYTES': '16980', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:53.230000000'}}]
def getStreamDescriptor(filename):

    ffprobeOutput = executeProcess(["ffprobe",
                                    "-show_streams",
                                    "-of", "json",
                                    filename])

    streamData = json.loads(ffprobeOutput)['streams']

    descriptor = []

    i = 0
    for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_VIDEO]:
        descriptor.append({
            'index': d['index'],
            'sub_index': i,
            'type': STREAM_TYPE_VIDEO,
            'codec': d['codec_name']
        })
        i += 1

    i = 0
    for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_AUDIO]:

        streamDescriptor = {
            'index': d['index'],
            'sub_index': i,
            'type': STREAM_TYPE_AUDIO,
            'codec': d['codec_name'],
            'channels': d['channels']
        }

        if 'channel_layout' in d.keys():
            streamDescriptor['layout'] = d['channel_layout']
        elif d['channels'] == 6:
            streamDescriptor['layout'] = STREAM_LAYOUT_6CH
        else:
            streamDescriptor['layout'] = 'undefined'

        descriptor.append(streamDescriptor)
        i += 1

    i = 0
    for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_SUBTITLE]:
        descriptor.append({
            'index': d['index'],
            'sub_index': i,
            'type': STREAM_TYPE_SUBTITLE,
            'codec': d['codec_name']
        })
        i += 1

    return descriptor
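The descriptor logic in `getStreamDescriptor` can be exercised without `ffprobe` or a real media file by separating the parsing from the probe. This is a hypothetical refactoring sketch, not the script's code: `buildDescriptor` is our name, and it takes the already-decoded stream list to show the same per-type `sub_index` counting and channel-layout fallback:

```python
# Hypothetical refactoring sketch: parse an already-decoded ffprobe stream
# list, so the layout fallback can be tested without probing a file.
STREAM_TYPE_VIDEO = 'video'
STREAM_TYPE_AUDIO = 'audio'
STREAM_TYPE_SUBTITLE = 'subtitle'
STREAM_LAYOUT_6CH = '6ch'

def buildDescriptor(streamData):
    descriptor = []
    for streamType in (STREAM_TYPE_VIDEO, STREAM_TYPE_AUDIO, STREAM_TYPE_SUBTITLE):
        # sub_index counts per stream type, independent of the global index.
        for i, d in enumerate(s for s in streamData if s['codec_type'] == streamType):
            entry = {'index': d['index'], 'sub_index': i,
                     'type': streamType, 'codec': d['codec_name']}
            if streamType == STREAM_TYPE_AUDIO:
                entry['channels'] = d['channels']
                # Same fallback chain as the script: explicit layout,
                # then a generic 6-channel label, then 'undefined'.
                entry['layout'] = d.get('channel_layout',
                                        STREAM_LAYOUT_6CH if d['channels'] == 6 else 'undefined')
            descriptor.append(entry)
    return descriptor

sample = [
    {'codec_type': 'video', 'index': 0, 'codec_name': 'vp9'},
    {'codec_type': 'audio', 'index': 1, 'codec_name': 'ac3', 'channels': 6},
]
described = buildDescriptor(sample)
```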
def generateAV1Tokens(q, p):

    return ['-c:v:0', 'libsvtav1',
            '-svtav1-params', f"crf={q}:preset={p}:tune=0:enable-overlays=1:scd=1:scm=0",
            '-pix_fmt', 'yuv420p10le']


def generateVP9Pass1Tokens(q):

    return ['-c:v:0', 'libvpx-vp9',
            '-row-mt', '1',
            '-crf', str(q),
            '-pass', '1',
            '-speed', '4',
            '-frame-parallel', '0',
            '-g', '9999',
            '-aq-mode', '0']


def generateVP9Pass2Tokens(q):

    return ['-c:v:0', 'libvpx-vp9',
            '-row-mt', '1',
            '-crf', str(q),
            '-pass', '2',
            '-frame-parallel', '0',
            '-g', '9999',
            '-aq-mode', '0',
            '-auto-alt-ref', '1',
            '-lag-in-frames', '25']


def generateCropTokens(start, length):

    return ['-ss', str(start), '-t', str(length)]


def generateDenoiseTokens(spatial=5, patch=7, research=7, hw=False):
    filterName = 'nlmeans_opencl' if hw else 'nlmeans'
    return ['-vf', f"{filterName}=s={spatial}:p={patch}:r={research}"]


def generateOutputTokens(f, suffix, q=None):

    if q is None:
        return ['-f', 'webm', f"{f}.{suffix}"]
    else:
        return ['-f', 'webm', f"{f}_q{q}.{suffix}"]
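The excerpt never shows the final command assembly (the `convert` body is commented out), but `COMMAND_TOKENS` and `NULL_TOKENS` suggest a composition along these lines for two-pass VP9. This is a standalone sketch under that assumption, with local copies of the relevant helpers; `input.mkv` and `output` are placeholder names:

```python
# Sketch: composing the token helpers into a two-pass VP9 command line.
# Standalone copies of the helpers above; the assembly itself is our guess.
COMMAND_TOKENS = ['ffmpeg', '-y', '-i']
NULL_TOKENS = ['-f', 'null', '/dev/null']

def generateVP9Pass1Tokens(q):
    return ['-c:v:0', 'libvpx-vp9', '-row-mt', '1', '-crf', str(q),
            '-pass', '1', '-speed', '4', '-frame-parallel', '0',
            '-g', '9999', '-aq-mode', '0']

def generateVP9Pass2Tokens(q):
    return ['-c:v:0', 'libvpx-vp9', '-row-mt', '1', '-crf', str(q),
            '-pass', '2', '-frame-parallel', '0', '-g', '9999',
            '-aq-mode', '0', '-auto-alt-ref', '1', '-lag-in-frames', '25']

def generateOutputTokens(f, suffix, q=None):
    name = f"{f}.{suffix}" if q is None else f"{f}_q{q}.{suffix}"
    return ['-f', 'webm', name]

# Pass 1 only writes the ffmpeg2pass-0.log stats file, so output is discarded.
pass1 = COMMAND_TOKENS + ['input.mkv'] + generateVP9Pass1Tokens(23) + NULL_TOKENS
# Pass 2 reads the stats file and writes the real output container.
pass2 = COMMAND_TOKENS + ['input.mkv'] + generateVP9Pass2Tokens(23) \
        + generateOutputTokens('output', 'webm', 23)
```

Each list could then be handed to `executeProcess` (or `subprocess.run`) as-is, since ffmpeg arguments are passed as separate tokens rather than a shell string.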
# preset = DEFAULT_AV1_PRESET
# presetTokens = [p for p in sys.argv if p.startswith('p=')]
# if presetTokens:
#     preset = int(presetTokens[0].split('=')[1])

# cropStart = ''
# cropLength = ''
# cropTokens = [c for c in sys.argv if c.startswith('crop')]
# if cropTokens:
#     if '=' in cropTokens[0]:
#         cropString = cropTokens[0].split('=')[1]
#         cropStart, cropLength = cropString.split(',')
#     else:
#         cropStart = 60
#         cropLength = 180
#
# denoiseTokens = [d for d in sys.argv if d.startswith('denoise')]
#

# for aStream in audioStreams:
#     if 'channel_layout' in aStream:
#         print(f"audio stream: {aStream['channel_layout']}")  # channel_layout
#     else:
#         print(f"unknown audio stream with {aStream['channels']} channels")  # channel_layout
def generateAudioTokens(context, index, layout):

    if layout == STREAM_LAYOUT_6_1:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                'channelmap=channel_layout=6.1',
                f"-b:a:{index}",
                context['bitrates']['dts']]

    elif layout == STREAM_LAYOUT_5_1:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                "channelmap=FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1",
                f"-b:a:{index}",
                context['bitrates']['ac3']]

    elif layout == STREAM_LAYOUT_STEREO:
        return [f"-c:a:{index}",
                'libopus',
                f"-b:a:{index}",
                context['bitrates']['stereo']]

    elif layout == STREAM_LAYOUT_6CH:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                "channelmap=FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1",
                f"-b:a:{index}",
                context['bitrates']['ac3']]
    else:
        return []


def generateClearTokens(streams):
    clearTokens = []
    for s in streams:
        for k in MKVMERGE_METADATA_KEYS:
            clearTokens += [f"-metadata:s:{s['type'][0]}:{s['sub_index']}", f"{k}="]
    return clearTokens
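To make the metadata-clearing scheme concrete, here is a standalone copy of `generateClearTokens` and its key list, showing the ffmpeg tokens it emits for one audio stream descriptor (the descriptor literal is a made-up example):

```python
# Standalone copy of generateClearTokens, to show its output shape.
MKVMERGE_METADATA_KEYS = ['BPS', 'NUMBER_OF_FRAMES', 'NUMBER_OF_BYTES',
                          '_STATISTICS_WRITING_APP',
                          '_STATISTICS_WRITING_DATE_UTC', '_STATISTICS_TAGS']

def generateClearTokens(streams):
    clearTokens = []
    for s in streams:
        for k in MKVMERGE_METADATA_KEYS:
            # e.g. "-metadata:s:a:0 BPS=" clears BPS on the first audio stream;
            # s['type'][0] turns 'audio'/'video'/'subtitle' into a/v/s.
            clearTokens += [f"-metadata:s:{s['type'][0]}:{s['sub_index']}", f"{k}="]
    return clearTokens

tokens = generateClearTokens([{'type': 'audio', 'sub_index': 0}])
```

Setting a tag to an empty value is how ffmpeg removes per-stream metadata, which is why each key is paired with a bare `KEY=` token.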
@click.group()
@click.pass_context
def ffx(ctx):
    """FFX"""
    ctx.obj = {}


# Define a subcommand
@ffx.command()
def version():
    click.echo(VERSION)


# Another subcommand
@ffx.command()
def help():
    click.echo(f"ffx {VERSION}\n")
    click.echo("Usage: ffx [input file] [output file] [vp9|av1] [q=[nn[,nn,...]]] [p=nn] [a=nnn[k]] [ac3=nnn[k]] [dts=nnn[k]] [crop]")


@ffx.command()
@click.argument('filename', nargs=1)
def streams(filename):
    for d in getStreamDescriptor(filename):
        click.echo(f"{d['codec']}{' (' + str(d['channels']) + ')' if d['type'] == 'audio' else ''}")
@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default=DEFAULT_LABEL, help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=DEFAULT_VIDEO_ENCODER, help='Target video encoder (vp9 or av1), default: vp9')
@click.option('-q', '--quality', type=str, default=DEFAULT_QUALITY, help='Quality settings to be used with the VP9 encoder (default: 23)')
@click.option('-p', '--preset', type=str, default=DEFAULT_AV1_PRESET, help='Quality preset to be used with the AV1 encoder (default: 5)')
@click.option('-a', '--stereo-bitrate', type=int, default=DEFAULT_STEREO_BANDWIDTH, help='Bitrate in kbit/s used to encode stereo audio streams')
@click.option('-ac3', '--ac3-bitrate', type=int, default=DEFAULT_AC3_BANDWIDTH, help='Bitrate in kbit/s used to encode 5.1 audio streams')
@click.option('-dts', '--dts-bitrate', type=int, default=DEFAULT_DTS_BANDWIDTH, help='Bitrate in kbit/s used to encode 6.1 audio streams')
@click.option('-ds', '--default-subtitle', type=int, help='Index of default subtitle stream')
@click.option('-fa', '--forced-audio', type=int, help='Index of forced audio stream (including default audio stream tag)')
@click.option('-da', '--default-audio', type=int, help='Index of default audio stream')
@click.option("--crop", is_flag=False, flag_value="default", default="none")
@click.option("-c", "--clear-metadata", is_flag=True, default=False)
@click.option("-d", "--denoise", is_flag=True, default=False)
def convert(ctx, paths, label, video_encoder, quality, preset, stereo_bitrate, ac3_bitrate, dts_bitrate, crop, clear_metadata, default_subtitle, forced_audio, default_audio, denoise):
|
||||||
|
    """Batch conversion of audio/video files into a format suitable for web playback, e.g. with Jellyfin.

    Files found under PATHS will be converted according to the given parameters.
    Filename extensions will be changed appropriately.
    Suffixes will be appended to the filename in case multiple files are created
    or if the filename has not changed."""

    #startTime = time.perf_counter()

    #sourcePath = paths[0]
    #targetFilename = paths[1]

    #if not os.path.isfile(sourcePath):
    #    raise click.ClickException(f"There is no file with path {sourcePath}")

    #click.echo(f"src: {sourcePath} tgt: {targetFilename}")

    #click.echo(f"ve={video_encoder}")

    #qualityTokens = quality.split(',')
    #q_list = [q for q in qualityTokens if q.isnumeric()]
    #click.echo(q_list)

    #ctx.obj['bitrates'] = {}
    #ctx.obj['bitrates']['stereo'] = str(stereo_bitrate) if str(stereo_bitrate).endswith('k') else f"{stereo_bitrate}k"
    #ctx.obj['bitrates']['ac3'] = str(ac3_bitrate) if str(ac3_bitrate).endswith('k') else f"{ac3_bitrate}k"
    #ctx.obj['bitrates']['dts'] = str(dts_bitrate) if str(dts_bitrate).endswith('k') else f"{dts_bitrate}k"

    #click.echo(f"a={ctx.obj['bitrates']['stereo']}")
    #click.echo(f"ac3={ctx.obj['bitrates']['ac3']}")
    #click.echo(f"dts={ctx.obj['bitrates']['dts']}")

    #performCrop = (crop != 'none')

    #if performCrop:
    #    cropTokens = crop.split(',')
    #    if cropTokens and len(cropTokens) == 2:
    #        cropStart, cropLength = crop.split(',')
    #    else:
    #        cropStart = DEFAULT_CROP_START
    #        cropLength = DEFAULT_CROP_LENGTH
    #    click.echo(f"crop start={cropStart} length={cropLength}")

    #click.echo(f"\nRunning {len(q_list)} jobs")

    #streamDescriptor = getStreamDescriptor(sourcePath)
    #commandTokens = COMMAND_TOKENS + [sourcePath]

    #for q in q_list:
    #    click.echo(f"\nRunning job q={q}")
    #    mappingVideoTokens = ['-map', 'v:0']
    #    mappingTokens = mappingVideoTokens.copy()
    #    audioTokens = []
    #    audioIndex = 0
    #    for audioStreamDescriptor in streamDescriptor:
    #        if audioStreamDescriptor['type'] == STREAM_TYPE_AUDIO:
    #            mappingTokens += ['-map', f"a:{audioIndex}"]
    #            audioTokens += generateAudioTokens(ctx.obj, audioIndex, audioStreamDescriptor['layout'])
    #            audioIndex += 1
    #    for s in range(len([d for d in streamDescriptor if d['type'] == STREAM_TYPE_SUBTITLE])):
    #        mappingTokens += ['-map', f"s:{s}"]
    #
    #    if video_encoder == 'av1':
    #        commandSequence = commandTokens + mappingTokens + audioTokens + generateAV1Tokens(q, preset) + audioTokens
    #        if clear_metadata:
    #            commandSequence += generateClearTokens(streamDescriptor)
    #        if performCrop:
    #            commandSequence += generateCropTokens(cropStart, cropLength)
    #        commandSequence += generateOutputTokens(targetFilename, DEFAULT_FILE_SUFFIX, q)
    #        click.echo(f"Command: {' '.join(commandSequence)}")
    #        executeProcess(commandSequence)
    #
    #    if video_encoder == 'vp9':
    #        commandSequence1 = commandTokens + mappingVideoTokens + generateVP9Pass1Tokens(q)
    #        if performCrop:
    #            commandSequence1 += generateCropTokens(cropStart, cropLength)
    #        commandSequence1 += NULL_TOKENS
    #        click.echo(f"Command 1: {' '.join(commandSequence1)}")
    #        if os.path.exists(TEMP_FILE_NAME):
    #            os.remove(TEMP_FILE_NAME)
    #        executeProcess(commandSequence1)
    #
    #        commandSequence2 = commandTokens + mappingTokens
    #        if denoise:
    #            commandSequence2 += generateDenoiseTokens()
    #        commandSequence2 += generateVP9Pass2Tokens(q) + audioTokens
    #        if clear_metadata:
    #            commandSequence2 += generateClearTokens(streamDescriptor)
    #        if performCrop:
    #            commandSequence2 += generateCropTokens(cropStart, cropLength)
    #        commandSequence2 += generateOutputTokens(targetFilename, DEFAULT_FILE_SUFFIX, q)
    #        click.echo(f"Command 2: {' '.join(commandSequence2)}")
    #        executeProcess(commandSequence2)

    #click.echo('\nDONE\n')

    #endTime = time.perf_counter()
    #click.echo(f"Time elapsed {endTime - startTime}")

    app = ModesApp(ctx.obj)
    app.run()

    click.echo(f"app result: {app.getContext()}")


if __name__ == '__main__':
    ffx()

444 bin/ffx.py (Executable file)
@@ -0,0 +1,444 @@
#! /usr/bin/python3

import os, sys, subprocess, json, click, time

VERSION = '0.1.0'

DEFAULT_VIDEO_ENCODER = 'vp9'
DEFAULT_QUALITY = 23
DEFAULT_AV1_PRESET = 5

DEFAULT_LABEL = 'output'
DEFAULT_FILE_SUFFIX = 'webm'

DEFAULT_STEREO_BANDWIDTH = "128"
DEFAULT_AC3_BANDWIDTH = "256"
DEFAULT_DTS_BANDWIDTH = "320"

DEFAULT_CROP_START = 60
DEFAULT_CROP_LENGTH = 180

TEMP_FILE_NAME = "ffmpeg2pass-0.log"

MKVMERGE_METADATA_KEYS = ['BPS',
                          'NUMBER_OF_FRAMES',
                          'NUMBER_OF_BYTES',
                          '_STATISTICS_WRITING_APP',
                          '_STATISTICS_WRITING_DATE_UTC',
                          '_STATISTICS_TAGS']

FILE_EXTENSION = ['mkv', 'mp4', 'avi', 'flv', 'webm']

COMMAND_TOKENS = ['ffmpeg', '-y', '-i']
NULL_TOKENS = ['-f', 'null', '/dev/null']

STREAM_TYPE_VIDEO = 'video'
STREAM_TYPE_AUDIO = 'audio'
STREAM_TYPE_SUBTITLE = 'subtitle'

STREAM_LAYOUT_6_1 = '6.1'
STREAM_LAYOUT_5_1 = '5.1(side)'
STREAM_LAYOUT_STEREO = 'stereo'
STREAM_LAYOUT_6CH = '6ch'

def executeProcess(commandSequence):
    process = subprocess.Popen(commandSequence, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = process.communicate()
    return output
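`executeProcess` discards the child's exit status and stderr, so a failing ffmpeg run goes unnoticed. A minimal hardened variant could look like the sketch below; the name `runChecked` and the raise-on-failure policy are illustrative assumptions, not part of ffx:

```python
import subprocess, sys

def runChecked(commandSequence):
    # Run the command, capture both streams, and fail loudly on a
    # non-zero exit code instead of silently returning partial output.
    result = subprocess.run(commandSequence, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{commandSequence[0]} failed: {result.stderr.strip()}")
    return result.stdout

# Example: run the current Python interpreter instead of ffmpeg/ffprobe.
print(runChecked([sys.executable, "-c", "print('ok')"]))
```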


# Sample ffprobe stream objects, for reference:
#[{'index': 0, 'codec_name': 'vp9', 'codec_long_name': 'Google VP9', 'profile': 'Profile 0', 'codec_type': 'video', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'width': 1920, 'height': 1080, 'coded_width': 1920, 'coded_height': 1080, 'closed_captions': 0, 'film_grain': 0, 'has_b_frames': 0, 'sample_aspect_ratio': '1:1', 'display_aspect_ratio': '16:9', 'pix_fmt': 'yuv420p', 'level': -99, 'color_range': 'tv', 'chroma_location': 'left', 'field_order': 'progressive', 'refs': 1, 'r_frame_rate': '24000/1001', 'avg_frame_rate': '24000/1001', 'time_base': '1/1000', 'start_pts': 0, 'start_time': '0.000000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'BPS': '7974017', 'NUMBER_OF_FRAMES': '34382', 'NUMBER_OF_BYTES': '1429358655', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libvpx-vp9', 'DURATION': '00:23:54.016000000'}}]
#[{'index': 1, 'codec_name': 'opus', 'codec_long_name': 'Opus (Opus Interactive Audio Codec)', 'codec_type': 'audio', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'sample_fmt': 'fltp', 'sample_rate': '48000', 'channels': 2, 'channel_layout': 'stereo', 'bits_per_sample': 0, 'initial_padding': 312, 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'extradata_size': 19, 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'jpn', 'title': 'Japanisch', 'BPS': '128000', 'NUMBER_OF_FRAMES': '61763', 'NUMBER_OF_BYTES': '22946145', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libopus', 'DURATION': '00:23:54.141000000'}}]
#[{'index': 2, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'ger', 'title': 'Deutsch [Full]', 'BPS': '118', 'NUMBER_OF_FRAMES': '300', 'NUMBER_OF_BYTES': '21128', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:54.010000000'}}, {'index': 3, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'eng', 'title': 'Englisch [Full]', 'BPS': '101', 'NUMBER_OF_FRAMES': '276', 'NUMBER_OF_BYTES': '16980', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:53.230000000'}}]

def getStreamDescriptor(filename):

    ffprobeOutput = executeProcess(["ffprobe",
                                    "-show_streams",
                                    "-of", "json",
                                    filename])

    streamData = json.loads(ffprobeOutput)['streams']

    descriptor = []

    i = 0
    for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_VIDEO]:
        descriptor.append({
            'index': d['index'],
            'sub_index': i,
            'type': STREAM_TYPE_VIDEO,
            'codec': d['codec_name']
        })
        i += 1

    i = 0
    for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_AUDIO]:

        streamDescriptor = {
            'index': d['index'],
            'sub_index': i,
            'type': STREAM_TYPE_AUDIO,
            'codec': d['codec_name'],
            'channels': d['channels']
        }

        if 'channel_layout' in d.keys():
            streamDescriptor['layout'] = d['channel_layout']
        elif d['channels'] == 6:
            streamDescriptor['layout'] = STREAM_LAYOUT_6CH
        else:
            streamDescriptor['layout'] = 'undefined'

        descriptor.append(streamDescriptor)
        i += 1

    i = 0
    for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_SUBTITLE]:
        descriptor.append({
            'index': d['index'],
            'sub_index': i,
            'type': STREAM_TYPE_SUBTITLE,
            'codec': d['codec_name']
        })
        i += 1

    return descriptor
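The descriptor shape built by `getStreamDescriptor` can be exercised without invoking ffprobe by feeding the same JSON layout directly. The compact variant below preserves file order instead of grouping by type, and the sample payload is illustrative, not real ffprobe output:

```python
import json

# Hypothetical, trimmed "ffprobe -show_streams -of json" payload.
SAMPLE = json.dumps({"streams": [
    {"index": 0, "codec_type": "video", "codec_name": "vp9"},
    {"index": 1, "codec_type": "audio", "codec_name": "opus",
     "channels": 2, "channel_layout": "stereo"},
]})

def describe(ffprobeJson):
    # Same fields as getStreamDescriptor: per-type sub_index, plus a
    # layout fallback for audio streams without channel_layout.
    descriptor = []
    counters = {}
    for s in json.loads(ffprobeJson)["streams"]:
        t = s["codec_type"]
        d = {"index": s["index"], "sub_index": counters.get(t, 0),
             "type": t, "codec": s["codec_name"]}
        if t == "audio":
            d["channels"] = s["channels"]
            d["layout"] = s.get("channel_layout",
                                "6ch" if s["channels"] == 6 else "undefined")
        descriptor.append(d)
        counters[t] = counters.get(t, 0) + 1
    return descriptor

print(describe(SAMPLE))
```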


def generateAV1Tokens(q, p):
    return ['-c:v:0', 'libsvtav1',
            '-svtav1-params', f"crf={q}:preset={p}:tune=0:enable-overlays=1:scd=1:scm=0",
            '-pix_fmt', 'yuv420p10le']


def generateVP9Pass1Tokens(q):
    return ['-c:v:0', 'libvpx-vp9',
            '-row-mt', '1',
            '-crf', str(q),
            '-pass', '1',
            '-speed', '4',
            '-frame-parallel', '0',
            '-g', '9999',
            '-aq-mode', '0']


def generateVP9Pass2Tokens(q):
    return ['-c:v:0', 'libvpx-vp9',
            '-row-mt', '1',
            '-crf', str(q),
            '-pass', '2',
            '-frame-parallel', '0',
            '-g', '9999',
            '-aq-mode', '0',
            '-auto-alt-ref', '1',
            '-lag-in-frames', '25']
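In two-pass VP9 encoding, pass 1 only collects statistics (written to `ffmpeg2pass-0.log`) and sends its output to the null muxer; pass 2 reads those statistics and produces the real file. A sketch of how the two passes compose into full command lines, with placeholder file names and a reduced token set compared to the generators above:

```python
# Sketch: assemble the two ffmpeg invocations for a VP9 two-pass encode.
def vp9TwoPassCommands(src, dst, q):
    base = ['ffmpeg', '-y', '-i', src, '-map', 'v:0']
    # Pass 1: analysis only, discard the encoded output.
    pass1 = base + ['-c:v:0', 'libvpx-vp9', '-crf', str(q),
                    '-pass', '1', '-f', 'null', '/dev/null']
    # Pass 2: consume the stats log and write the final webm.
    pass2 = base + ['-c:v:0', 'libvpx-vp9', '-crf', str(q),
                    '-pass', '2', '-f', 'webm', dst]
    return pass1, pass2

p1, p2 = vp9TwoPassCommands('in.mkv', 'out.webm', 23)
print(' '.join(p1))
print(' '.join(p2))
```

Both passes must use the same `-crf` (and ideally the same filters), otherwise the pass-1 statistics no longer match what pass 2 encodes.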


def generateCropTokens(start, length):
    return ['-ss', str(start), '-t', str(length)]
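Note that these tokens are appended after `-i`, so ffmpeg treats `-ss`/`-t` as output options: it decodes from the start and discards frames before `start`, which is exact but slow for large offsets. Placing `-ss` before `-i` seeks the input instead and is usually much faster. The two orderings, as a sketch with placeholder names:

```python
# Output-side cut (mirrors generateCropTokens' placement after -i):
def outputSideCut(src, start, length):
    return ['ffmpeg', '-y', '-i', src, '-ss', str(start), '-t', str(length)]

# Input-side seek: -ss before -i jumps to the offset before decoding.
def inputSideCut(src, start, length):
    return ['ffmpeg', '-y', '-ss', str(start), '-i', src, '-t', str(length)]

print(' '.join(inputSideCut('in.mkv', 60, 180)))
```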


def generateDenoiseTokens(spatial=5, patch=7, research=7, hw=False):
    filterName = 'nlmeans_opencl' if hw else 'nlmeans'
    return ['-vf', f"{filterName}=s={spatial}:p={patch}:r={research}"]


def generateOutputTokens(f, suffix, q=None):
    if q is None:
        return ['-f', 'webm', f"{f}.{suffix}"]
    else:
        return ['-f', 'webm', f"{f}_q{q}.{suffix}"]


# Legacy sys.argv parsing, kept for reference:
# preset = DEFAULT_AV1_PRESET
# presetTokens = [p for p in sys.argv if p.startswith('p=')]
# if presetTokens:
#     preset = int(presetTokens[0].split('=')[1])
#
# cropStart = ''
# cropLength = ''
# cropTokens = [c for c in sys.argv if c.startswith('crop')]
# if cropTokens:
#     if '=' in cropTokens[0]:
#         cropString = cropTokens[0].split('=')[1]
#         cropStart, cropLength = cropString.split(',')
#     else:
#         cropStart = 60
#         cropLength = 180
#
# denoiseTokens = [d for d in sys.argv if d.startswith('denoise')]
#
# for aStream in audioStreams:
#     if 'channel_layout' in aStream:
#         print(f"audio stream: {aStream['channel_layout']}")
#     else:
#         print(f"unknown audio stream with {aStream['channels']} channels")


def generateAudioTokens(context, index, layout):

    if layout == STREAM_LAYOUT_6_1:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                'channelmap=channel_layout=6.1',
                f"-b:a:{index}",
                context['bitrates']['dts']]

    elif layout == STREAM_LAYOUT_5_1:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                "channelmap=FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1",
                f"-b:a:{index}",
                context['bitrates']['ac3']]

    elif layout == STREAM_LAYOUT_STEREO:
        return [f"-c:a:{index}",
                'libopus',
                f"-b:a:{index}",
                context['bitrates']['stereo']]

    elif layout == STREAM_LAYOUT_6CH:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                "channelmap=FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1",
                f"-b:a:{index}",
                context['bitrates']['ac3']]
    else:
        return []


def generateClearTokens(streams):
    clearTokens = []
    for s in streams:
        for k in MKVMERGE_METADATA_KEYS:
            clearTokens += [f"-metadata:s:{s['type'][0]}:{s['sub_index']}", f"{k}="]
    return clearTokens
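`generateClearTokens` expands each mkvmerge statistics key into an empty `-metadata:s:<type-initial>:<sub_index>` assignment, which makes ffmpeg drop the stale per-track statistics tags. A self-contained copy of that logic (the constant is duplicated here so the snippet runs on its own):

```python
MKVMERGE_METADATA_KEYS = ['BPS', 'NUMBER_OF_FRAMES', 'NUMBER_OF_BYTES',
                          '_STATISTICS_WRITING_APP',
                          '_STATISTICS_WRITING_DATE_UTC', '_STATISTICS_TAGS']

def clearTokens(streams):
    # One empty metadata assignment per key per stream, addressed by the
    # first letter of the stream type (v/a/s) and the per-type sub index.
    tokens = []
    for s in streams:
        for k in MKVMERGE_METADATA_KEYS:
            tokens += [f"-metadata:s:{s['type'][0]}:{s['sub_index']}", f"{k}="]
    return tokens

print(clearTokens([{'type': 'audio', 'sub_index': 0}])[:2])
```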


@click.group()
@click.pass_context
def ffx(ctx):
    """FFX"""
    ctx.obj = {}


# Define a subcommand
@ffx.command()
def version():
    click.echo(VERSION)


# Another subcommand
@ffx.command()
def help():
    click.echo(f"ffx {VERSION}\n")
    click.echo("Usage: ffx [input file] [output file] [vp9|av1] [q=[nn[,nn,...]]] [p=nn] [a=nnn[k]] [ac3=nnn[k]] [dts=nnn[k]] [crop]")


@ffx.command()
@click.argument('filename', nargs=1)
def streams(filename):
    for d in getStreamDescriptor(filename):
        click.echo(f"{d['codec']}{' (' + str(d['channels']) + ')' if d['type'] == 'audio' else ''}")


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default=DEFAULT_LABEL, help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=DEFAULT_VIDEO_ENCODER, help='Target video encoder (vp9 or av1), default: vp9')
@click.option('-q', '--quality', type=str, default=DEFAULT_QUALITY, help='Quality settings to be used with the VP9 encoder (default: 23)')
@click.option('-p', '--preset', type=str, default=DEFAULT_AV1_PRESET, help='Quality preset to be used with the AV1 encoder (default: 5)')
@click.option('-a', '--stereo-bitrate', type=int, default=DEFAULT_STEREO_BANDWIDTH, help='Bitrate in kbit/s used to encode stereo audio streams')
@click.option('-ac3', '--ac3-bitrate', type=int, default=DEFAULT_AC3_BANDWIDTH, help='Bitrate in kbit/s used to encode 5.1 audio streams')
@click.option('-dts', '--dts-bitrate', type=int, default=DEFAULT_DTS_BANDWIDTH, help='Bitrate in kbit/s used to encode 6.1 audio streams')
@click.option('-ds', '--default-subtitle', type=int, help='Index of default subtitle stream')
@click.option('-fa', '--forced-audio', type=int, help='Index of forced audio stream (including default audio stream tag)')
@click.option('-da', '--default-audio', type=int, help='Index of default audio stream')
@click.option("--crop", is_flag=False, flag_value="default", default="none")
@click.option("-c", "--clear-metadata", is_flag=True, default=False)
@click.option("-d", "--denoise", is_flag=True, default=False)
def convert(ctx, paths, label, video_encoder, quality, preset, stereo_bitrate, ac3_bitrate, dts_bitrate, crop, clear_metadata, default_subtitle, forced_audio, default_audio, denoise):
    """Batch conversion of audio/video files into a format suitable for web playback, e.g. with Jellyfin.

    Files found under PATHS will be converted according to the given parameters.
    Filename extensions will be changed appropriately.
    Suffixes will be appended to the filename in case multiple files are created
    or if the filename has not changed."""

    startTime = time.perf_counter()

    sourcePath = paths[0]
    targetFilename = paths[1]

    if not os.path.isfile(sourcePath):
        raise click.ClickException(f"There is no file with path {sourcePath}")

    click.echo(f"src: {sourcePath} tgt: {targetFilename}")
    click.echo(f"ve={video_encoder}")

    qualityTokens = quality.split(',')
    q_list = [q for q in qualityTokens if q.isnumeric()]
    click.echo(q_list)

    ctx.obj['bitrates'] = {}
    ctx.obj['bitrates']['stereo'] = str(stereo_bitrate) if str(stereo_bitrate).endswith('k') else f"{stereo_bitrate}k"
    ctx.obj['bitrates']['ac3'] = str(ac3_bitrate) if str(ac3_bitrate).endswith('k') else f"{ac3_bitrate}k"
    ctx.obj['bitrates']['dts'] = str(dts_bitrate) if str(dts_bitrate).endswith('k') else f"{dts_bitrate}k"

    click.echo(f"a={ctx.obj['bitrates']['stereo']}")
    click.echo(f"ac3={ctx.obj['bitrates']['ac3']}")
    click.echo(f"dts={ctx.obj['bitrates']['dts']}")

    performCrop = (crop != 'none')

    if performCrop:
        cropTokens = crop.split(',')
        if cropTokens and len(cropTokens) == 2:
            cropStart, cropLength = crop.split(',')
        else:
            cropStart = DEFAULT_CROP_START
            cropLength = DEFAULT_CROP_LENGTH
        click.echo(f"crop start={cropStart} length={cropLength}")

    click.echo(f"\nRunning {len(q_list)} jobs")

    streamDescriptor = getStreamDescriptor(sourcePath)
    commandTokens = COMMAND_TOKENS + [sourcePath]

    for q in q_list:

        click.echo(f"\nRunning job q={q}")

        mappingVideoTokens = ['-map', 'v:0']
        mappingTokens = mappingVideoTokens.copy()
        audioTokens = []

        audioIndex = 0
        for audioStreamDescriptor in streamDescriptor:
            if audioStreamDescriptor['type'] == STREAM_TYPE_AUDIO:
                mappingTokens += ['-map', f"a:{audioIndex}"]
                audioTokens += generateAudioTokens(ctx.obj, audioIndex, audioStreamDescriptor['layout'])
                audioIndex += 1

        for s in range(len([d for d in streamDescriptor if d['type'] == STREAM_TYPE_SUBTITLE])):
            mappingTokens += ['-map', f"s:{s}"]

        if video_encoder == 'av1':

            commandSequence = commandTokens + mappingTokens + generateAV1Tokens(q, preset) + audioTokens

            if clear_metadata:
                commandSequence += generateClearTokens(streamDescriptor)

            if performCrop:
                commandSequence += generateCropTokens(cropStart, cropLength)

            commandSequence += generateOutputTokens(targetFilename, DEFAULT_FILE_SUFFIX, q)

            click.echo(f"Command: {' '.join(commandSequence)}")

            executeProcess(commandSequence)

        if video_encoder == 'vp9':

            commandSequence1 = commandTokens + mappingVideoTokens + generateVP9Pass1Tokens(q)

            if performCrop:
                commandSequence1 += generateCropTokens(cropStart, cropLength)

            commandSequence1 += NULL_TOKENS

            click.echo(f"Command 1: {' '.join(commandSequence1)}")

            if os.path.exists(TEMP_FILE_NAME):
                os.remove(TEMP_FILE_NAME)

            executeProcess(commandSequence1)

            commandSequence2 = commandTokens + mappingTokens

            if denoise:
                commandSequence2 += generateDenoiseTokens()

            commandSequence2 += generateVP9Pass2Tokens(q) + audioTokens

            if clear_metadata:
                commandSequence2 += generateClearTokens(streamDescriptor)

            if performCrop:
                commandSequence2 += generateCropTokens(cropStart, cropLength)

            commandSequence2 += generateOutputTokens(targetFilename, DEFAULT_FILE_SUFFIX, q)

            click.echo(f"Command 2: {' '.join(commandSequence2)}")

            executeProcess(commandSequence2)

    click.echo('\nDONE\n')

    endTime = time.perf_counter()
    click.echo(f"Time elapsed {endTime - startTime}")


if __name__ == '__main__':
    ffx()

@@ -1,54 +0,0 @@
[project]
name = "ffx"
description = "FFX recoding and metadata managing tool"
version = "0.2.4"
license = {file = "LICENSE.md"}
dependencies = [
    "requests",
    "jinja2",
    "click",
    "textual",
    "sqlalchemy",
]
readme = {file = "README.md", content-type = "text/markdown"}
authors = [
    {name = "Marius", email = "javanaut@maveno.de"}
]
maintainers = [
    {name = "Marius", email = "javanaut@maveno.de"}
]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Programming Language :: Python"
]

[project.urls]
Homepage = "https://gitea.maveno.de/Javanaut/ffx"
Repository = "https://gitea.maveno.de/Javanaut/ffx.git"
Issues = "https://gitea.maveno.de/Javanaut/ffx/issues"

[project.optional-dependencies]
test = [
    "pytest",
]

[build-system]
requires = [
    "setuptools",
    "wheel"
]
build-backend = "setuptools.build_meta"

[project.scripts]
ffx = "ffx.cli:ffx"

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
norecursedirs = ["tests/legacy", "tests/support"]
addopts = "-ra"
markers = [
    "integration: exercises the FFX bundle with real ffmpeg/ffprobe processes",
    "pattern_management: covers requirements/pattern_management.md",
    "subtrack_mapping: covers requirements/subtrack_mapping.md",
]
@@ -1,9 +0,0 @@
from .cli import ffx


def main():
    ffx()


if __name__ == "__main__":
    main()
@@ -1,71 +0,0 @@
from enum import Enum
from .track_type import TrackType


class AudioLayout(Enum):

    LAYOUT_STEREO = {"label": "stereo", "index": 1}
    LAYOUT_5_1 = {"label": "5.1(side)", "index": 2}
    LAYOUT_6_1 = {"label": "6.1", "index": 3}
    LAYOUT_7_1 = {"label": "7.1", "index": 4}  # TODO: Does this exist?

    LAYOUT_6CH = {"label": "6ch", "index": 5}
    LAYOUT_5_0 = {"label": "5.0(side)", "index": 6}

    LAYOUT_UNDEFINED = {"label": "undefined", "index": 0}

    def label(self):
        """Returns the audio layout as string"""
        return str(self.value['label'])

    def index(self):
        """Returns the audio layout as integer"""
        return int(self.value['index'])

    @staticmethod
    def fromLabel(label: str):
        try:
            return [a for a in AudioLayout if a.value['label'] == str(label)][0]
        except:
            return AudioLayout.LAYOUT_UNDEFINED

    # @staticmethod
    # def fromIndex(index : int):
    #     try:
    #         target_index = int(index)
    #     except (TypeError, ValueError):
    #         return AudioLayout.LAYOUT_UNDEFINED
    #     return next((a for a in AudioLayout if a.value['index'] == target_index),
    #                 AudioLayout.LAYOUT_UNDEFINED)

    @staticmethod
    def fromIndex(index: int):
        try:
            return [a for a in AudioLayout if a.value['index'] == int(index)][0]
        except:
            return AudioLayout.LAYOUT_UNDEFINED

    @staticmethod
    def identify(streamObj):

        FFPROBE_LAYOUT_KEY = 'channel_layout'
        FFPROBE_CHANNELS_KEY = 'channels'
        FFPROBE_CODEC_TYPE_KEY = 'codec_type'

        if (type(streamObj) is not dict
                or FFPROBE_CODEC_TYPE_KEY not in streamObj.keys()
                or streamObj[FFPROBE_CODEC_TYPE_KEY] != TrackType.AUDIO.label()):
            raise Exception('Not an ffprobe audio stream object')

        if FFPROBE_LAYOUT_KEY in streamObj.keys():
            matchingLayouts = [l for l in AudioLayout if l.label() == streamObj[FFPROBE_LAYOUT_KEY]]
            if matchingLayouts:
                return matchingLayouts[0]

        if (FFPROBE_CHANNELS_KEY in streamObj.keys()
                and int(streamObj[FFPROBE_CHANNELS_KEY]) == 6):
            return AudioLayout.LAYOUT_6CH

        return AudioLayout.LAYOUT_UNDEFINED
1396 src/ffx/cli.py
File diff suppressed because it is too large
@@ -1,196 +0,0 @@
import os, json

from .constants import (
    DEFAULT_SHOW_INDEX_EPISODE_DIGITS,
    DEFAULT_SHOW_INDEX_SEASON_DIGITS,
    DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS,
    DEFAULT_SHOW_INDICATOR_SEASON_DIGITS,
)

class ConfigurationController():

    CONFIG_FILENAME = 'ffx.json'
    DATABASE_FILENAME = 'ffx.db'
    LOG_FILENAME = 'ffx.log'

    DATABASE_PATH_CONFIG_KEY = 'databasePath'
    LOG_DIRECTORY_CONFIG_KEY = 'logDirectory'
    SUBTITLES_DIRECTORY_CONFIG_KEY = 'subtitlesDirectory'
    OUTPUT_FILENAME_TEMPLATE_KEY = 'outputFilenameTemplate'
    DEFAULT_INDEX_SEASON_DIGITS_CONFIG_KEY = 'defaultIndexSeasonDigits'
    DEFAULT_INDEX_EPISODE_DIGITS_CONFIG_KEY = 'defaultIndexEpisodeDigits'
    DEFAULT_INDICATOR_SEASON_DIGITS_CONFIG_KEY = 'defaultIndicatorSeasonDigits'
    DEFAULT_INDICATOR_EPISODE_DIGITS_CONFIG_KEY = 'defaultIndicatorEpisodeDigits'


    def __init__(self):

        self.__homeDir = os.path.expanduser("~")
        self.__localVarDir = os.path.join(self.__homeDir, '.local', 'var')
        self.__localEtcDir = os.path.join(self.__homeDir, '.local', 'etc')

        self.__configurationData = {}

        # .local/etc/ffx.json
        self.__configFilePath = os.path.join(self.__localEtcDir, ConfigurationController.CONFIG_FILENAME)
        if os.path.isfile(self.__configFilePath):
            with open(self.__configFilePath, 'r') as configurationFile:
                self.__configurationData = json.load(configurationFile)

        if ConfigurationController.DATABASE_PATH_CONFIG_KEY in self.__configurationData.keys():
            self.__databaseFilePath = self.__configurationData[ConfigurationController.DATABASE_PATH_CONFIG_KEY]
            os.makedirs(os.path.dirname(self.__databaseFilePath), exist_ok=True)
        else:
            ffxVarDir = os.path.join(self.__localVarDir, 'ffx')
            os.makedirs(ffxVarDir, exist_ok=True)
            self.__databaseFilePath = os.path.join(ffxVarDir, ConfigurationController.DATABASE_FILENAME)

        if ConfigurationController.LOG_DIRECTORY_CONFIG_KEY in self.__configurationData.keys():
            self.__logDir = self.__configurationData[ConfigurationController.LOG_DIRECTORY_CONFIG_KEY]
        else:
            self.__logDir = os.path.join(self.__localVarDir, 'log')
        os.makedirs(self.__logDir, exist_ok=True)


    def getHomeDirectory(self):
        return self.__homeDir

    def getLogFilePath(self):
        return os.path.join(self.__logDir, ConfigurationController.LOG_FILENAME)

    def getDatabaseFilePath(self):
        return self.__databaseFilePath

    def getSubtitlesDirectoryPath(self):
        subtitlesDirectory = self.__configurationData.get(
            ConfigurationController.SUBTITLES_DIRECTORY_CONFIG_KEY,
            '',
        )
        return os.path.expanduser(str(subtitlesDirectory)) if subtitlesDirectory else ''

    @classmethod
    def getConfiguredIntegerValue(cls, configurationData: dict, configKey: str, defaultValue: int) -> int:
        configuredValue = configurationData.get(configKey, defaultValue)
        try:
            return int(configuredValue)
        except (TypeError, ValueError):
            return int(defaultValue)

    def getDefaultIndexSeasonDigits(self):
        return ConfigurationController.getConfiguredIntegerValue(
            self.__configurationData,
            ConfigurationController.DEFAULT_INDEX_SEASON_DIGITS_CONFIG_KEY,
            DEFAULT_SHOW_INDEX_SEASON_DIGITS,
        )

    def getDefaultIndexEpisodeDigits(self):
        return ConfigurationController.getConfiguredIntegerValue(
            self.__configurationData,
            ConfigurationController.DEFAULT_INDEX_EPISODE_DIGITS_CONFIG_KEY,
            DEFAULT_SHOW_INDEX_EPISODE_DIGITS,
        )

    def getDefaultIndicatorSeasonDigits(self):
        return ConfigurationController.getConfiguredIntegerValue(
            self.__configurationData,
            ConfigurationController.DEFAULT_INDICATOR_SEASON_DIGITS_CONFIG_KEY,
            DEFAULT_SHOW_INDICATOR_SEASON_DIGITS,
        )

    def getDefaultIndicatorEpisodeDigits(self):
        return ConfigurationController.getConfiguredIntegerValue(
            self.__configurationData,
            ConfigurationController.DEFAULT_INDICATOR_EPISODE_DIGITS_CONFIG_KEY,
            DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS,
        )

    def getData(self):
        return self.__configurationData


#
#
#
# def addPattern(self, patternDescriptor):
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.show_id == int(patternDescriptor['show_id']),
#                                     Pattern.pattern == str(patternDescriptor['pattern']))
#
#         if not q.count():
#             pattern = Pattern(show_id = int(patternDescriptor['show_id']),
#                               pattern = str(patternDescriptor['pattern']))
#             s.add(pattern)
#             s.commit()
#             return pattern.getId()
#         else:
#             return 0
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.addPattern(): {repr(ex)}")
#     finally:
#         s.close()
#
#
# def updatePattern(self, patternId, patternDescriptor):
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.id == int(patternId))
#
#         if q.count():
#
#             pattern = q.first()
#
#             pattern.show_id = int(patternDescriptor['show_id'])
#             pattern.pattern = str(patternDescriptor['pattern'])
#
#             s.commit()
#             return True
#
#         else:
#             return False
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.updatePattern(): {repr(ex)}")
#     finally:
#         s.close()
#
#
#
# def findPattern(self, patternDescriptor):
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.show_id == int(patternDescriptor['show_id']), Pattern.pattern == str(patternDescriptor['pattern']))
#
#         if q.count():
#             pattern = q.first()
#             return int(pattern.id)
#         else:
#             return None
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.findPattern(): {repr(ex)}")
#     finally:
#         s.close()
#
#
# def getPattern(self, patternId : int):
#
#     if type(patternId) is not int:
#         raise ValueError(f"PatternController.getPattern(): Argument patternId is required to be of type int")
#
#     try:
#         s = self.Session()
#         q = s.query(Pattern).filter(Pattern.id == int(patternId))
#
#         return q.first() if q.count() else None
#
#     except Exception as ex:
#         raise click.ClickException(f"PatternController.getPattern(): {repr(ex)}")
#     finally:
#         s.close()
#
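The `getConfiguredIntegerValue` helper above is the only piece of `ConfigurationController` with no filesystem dependency, so it can be exercised standalone. The sketch below restates it verbatim with hypothetical config data to show the three coercion paths (valid string, unparseable value, missing key):

```python
# Standalone restatement of ConfigurationController.getConfiguredIntegerValue
# for illustration; the real method is a classmethod on the controller.
def getConfiguredIntegerValue(configurationData: dict, configKey: str, defaultValue: int) -> int:
    configuredValue = configurationData.get(configKey, defaultValue)
    try:
        return int(configuredValue)
    except (TypeError, ValueError):
        return int(defaultValue)

# Hypothetical ffx.json contents, for illustration only.
config = {'defaultIndexSeasonDigits': '3', 'defaultIndexEpisodeDigits': None}

print(getConfiguredIntegerValue(config, 'defaultIndexSeasonDigits', 2))   # string coerced to int
print(getConfiguredIntegerValue(config, 'defaultIndexEpisodeDigits', 2))  # None falls back to default
print(getConfiguredIntegerValue(config, 'missingKey', 2))                 # absent key uses default
```

Catching both `TypeError` and `ValueError` is what lets a `null` or a non-numeric string in `ffx.json` degrade silently to the built-in default instead of crashing startup.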
@@ -1,30 +0,0 @@
VERSION='0.2.4'
DATABASE_VERSION = 2

DEFAULT_QUALITY = 32
DEFAULT_AV1_PRESET = 5

DEFAULT_VIDEO_ENCODER_LABEL = "vp9"
DEFAULT_CONTAINER_FORMAT = "webm"
DEFAULT_CONTAINER_EXTENSION = "webm"
SUPPORTED_INPUT_FILE_EXTENSIONS = ("mkv", "mp4", "avi", "flv", "webm")
FFMPEG_COMMAND_TOKENS = ("ffmpeg", "-y")
FFMPEG_NULL_OUTPUT_TOKENS = ("-f", "null", "/dev/null")

DEFAULT_STEREO_BANDWIDTH = "112"
DEFAULT_AC3_BANDWIDTH = "256"
DEFAULT_DTS_BANDWIDTH = "320"
DEFAULT_7_1_BANDWIDTH = "384"

DEFAULT_CROPDETECT_SEEK_SECONDS = 60
DEFAULT_CROPDETECT_DURATION_SECONDS = 180

DEFAULT_cut_start = 60
DEFAULT_cut_length = 180

DEFAULT_SHOW_INDEX_SEASON_DIGITS = 2
DEFAULT_SHOW_INDEX_EPISODE_DIGITS = 2
DEFAULT_SHOW_INDICATOR_SEASON_DIGITS = 2
DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS = 2

DEFAULT_OUTPUT_FILENAME_TEMPLATE = '{{ ffx_show_name }} - {{ ffx_index }}{{ ffx_index_separator }}{{ ffx_episode_name }}{{ ffx_indicator_separator }}{{ ffx_indicator }}'
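`DEFAULT_OUTPUT_FILENAME_TEMPLATE` uses `{{ ... }}` placeholders in the style of a template engine such as Jinja2 (an assumption; the renderer is not shown in this diff). The sketch below renders the template with a minimal regex-based substitution and entirely hypothetical variable values, just to show the shape of the resulting filename:

```python
import re

# Hypothetical placeholder values, for illustration only; the real values
# would come from the show database and the digit-count settings above.
values = {
    'ffx_show_name': 'Example Show',
    'ffx_index': 'S01E02',
    'ffx_index_separator': ' - ',
    'ffx_episode_name': 'Pilot',
    'ffx_indicator_separator': ' ',
    'ffx_indicator': 's01e02',
}

template = ('{{ ffx_show_name }} - {{ ffx_index }}{{ ffx_index_separator }}'
            '{{ ffx_episode_name }}{{ ffx_indicator_separator }}{{ ffx_indicator }}')

# Minimal renderer standing in for a real template engine.
rendered = re.sub(r'\{\{\s*(\w+)\s*\}\}', lambda m: values.get(m.group(1), ''), template)
print(rendered)
```

Note the separators are themselves template variables, so an episode without a name can collapse cleanly rather than leaving a dangling " - ".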
@@ -1,119 +0,0 @@
import os, click

from sqlalchemy import create_engine, inspect
from sqlalchemy.orm import sessionmaker

# Import the full model package so SQLAlchemy registers every mapped class
# before metadata creation and the first ORM query.
import ffx.model
from ffx.model.show import Base

from ffx.model.property import Property

from ffx.constants import DATABASE_VERSION


DATABASE_VERSION_KEY = 'database_version'
EXPECTED_TABLE_NAMES = set(Base.metadata.tables.keys())

class DatabaseVersionException(Exception):
    def __init__(self, errorMessage):
        super().__init__(errorMessage)

def databaseContext(databasePath: str = ''):

    databaseContext = {}

    if databasePath is None:
        # sqlite:///:memory:
        databasePath = ':memory:'
    elif not databasePath:
        homeDir = os.path.expanduser("~")
        ffxVarDir = os.path.join(homeDir, '.local', 'var', 'ffx')
        if not os.path.exists(ffxVarDir):
            os.makedirs(ffxVarDir)
        databasePath = os.path.join(ffxVarDir, 'ffx.db')

    databaseContext['url'] = f"sqlite:///{databasePath}"
    databaseContext['engine'] = create_engine(databaseContext['url'])
    databaseContext['session'] = sessionmaker(bind=databaseContext['engine'])

    bootstrapDatabaseIfNeeded(databaseContext)

    # isSyncronuous = False
    # while not isSyncronuous:
    # while True:
    #     try:
    #         with databaseContext['database_engine'].connect() as connection:
    #             connection.execute(sqlalchemy.text('PRAGMA foreign_keys=ON;'))
    #         #isSyncronuous = True
    #         break
    #     except sqlite3.OperationalError:
    #         time.sleep(0.1)

    ensureDatabaseVersion(databaseContext)

    return databaseContext


def databaseNeedsBootstrap(databaseContext) -> bool:
    inspector = inspect(databaseContext['engine'])
    existingTableNames = set(inspector.get_table_names())
    return not EXPECTED_TABLE_NAMES.issubset(existingTableNames)


def bootstrapDatabaseIfNeeded(databaseContext):
    if not databaseNeedsBootstrap(databaseContext):
        return

    Base.metadata.create_all(databaseContext['engine'])


def ensureDatabaseVersion(databaseContext):

    currentDatabaseVersion = getDatabaseVersion(databaseContext)
    if currentDatabaseVersion:
        if currentDatabaseVersion != DATABASE_VERSION:
            raise DatabaseVersionException(f"Current database version ({currentDatabaseVersion}) does not match required ({DATABASE_VERSION})")
    else:
        setDatabaseVersion(databaseContext, DATABASE_VERSION)


def getDatabaseVersion(databaseContext):

    try:
        Session = databaseContext['session']
        s = Session()
        versionProperty = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY).first()

        return int(versionProperty.value) if versionProperty is not None else 0

    except Exception as ex:
        raise click.ClickException(f"getDatabaseVersion(): {repr(ex)}")
    finally:
        s.close()


def setDatabaseVersion(databaseContext, databaseVersion: int):

    try:
        Session = databaseContext['session']
        s = Session()

        q = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY)

        dbVersion = int(databaseVersion)

        versionProperty = q.first()
        if versionProperty:
            versionProperty.value = str(dbVersion)
        else:
            versionProperty = Property(key = DATABASE_VERSION_KEY,
                                       value = str(dbVersion))
            s.add(versionProperty)
        s.commit()

    except Exception as ex:
        raise click.ClickException(f"setDatabaseVersion(): {repr(ex)}")
    finally:
        s.close()
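The version handshake above (read a `database_version` property row, raise on mismatch, stamp it on a fresh database) can be sketched without SQLAlchemy or the ffx models. The sketch below uses stdlib `sqlite3` and a plain `properties(key, value)` table as a stand-in for the `Property` ORM model; table and function names here are illustrative, not the project's:

```python
import sqlite3

DATABASE_VERSION = 2  # mirrors ffx.constants.DATABASE_VERSION

def ensure_version(conn: sqlite3.Connection) -> None:
    # Stand-in for ensureDatabaseVersion(): read the stored version,
    # fail on mismatch, or stamp the current version into a fresh DB.
    conn.execute('CREATE TABLE IF NOT EXISTS properties (key TEXT PRIMARY KEY, value TEXT)')
    row = conn.execute("SELECT value FROM properties WHERE key = 'database_version'").fetchone()
    current = int(row[0]) if row else 0
    if current:
        if current != DATABASE_VERSION:
            raise RuntimeError(
                f"Current database version ({current}) does not match required ({DATABASE_VERSION})")
    else:
        conn.execute("INSERT INTO properties VALUES ('database_version', ?)",
                     (str(DATABASE_VERSION),))
        conn.commit()

conn = sqlite3.connect(':memory:')
ensure_version(conn)   # fresh DB: version gets stamped
ensure_version(conn)   # second call: stored version matches, no error
```

Treating version 0 as "never stamped" is what makes the same function serve both first-run initialization and later compatibility checks.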
@@ -1,38 +0,0 @@
from textual.app import App

from .shows_screen import ShowsScreen
from .media_details_screen import MediaDetailsScreen


class FfxApp(App):

    TITLE = "FFX"

    BINDINGS = [
        ("q", "quit()", "Quit"),
        ("h", "switch_mode('help')", "Help"),
    ]


    def __init__(self, context = {}):
        super().__init__()

        # Data 'input' variable
        self.context = context


    def on_mount(self) -> None:

        if 'command' in self.context.keys():

            if self.context['command'] == 'shows':
                self.push_screen(ShowsScreen())

            if self.context['command'] == 'inspect':
                self.push_screen(MediaDetailsScreen())


    def getContext(self):
        """Data 'output' method"""
        return self.context
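The `on_mount` dispatch above is a simple command-to-screen lookup. A table-driven sketch of the same idea, with string stand-ins for the Textual screen classes (so it runs without `textual` installed):

```python
# Hypothetical stand-ins for the Textual screen classes, for illustration only.
SCREENS = {
    'shows': 'ShowsScreen',
    'inspect': 'MediaDetailsScreen',
}

def screen_for(context: dict):
    # Mirrors FfxApp.on_mount(): pick a screen from context['command'],
    # or none when the key is absent or the command is unknown.
    return SCREENS.get(context.get('command'))

print(screen_for({'command': 'shows'}))
```

A dict lookup scales better than the chained `if` blocks as commands are added, though either form works for two entries.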
@@ -1,468 +0,0 @@
|
|||||||
import os, click
|
|
||||||
from logging import Logger
|
|
||||||
|
|
||||||
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
|
|
||||||
|
|
||||||
from ffx.media_descriptor import MediaDescriptor
|
|
||||||
from ffx.audio_layout import AudioLayout
|
|
||||||
from ffx.track_type import TrackType
|
|
||||||
from ffx.track_codec import TrackCodec
|
|
||||||
from ffx.video_encoder import VideoEncoder
|
|
||||||
from ffx.process import executeProcess
|
|
||||||
|
|
||||||
from ffx.constants import (
|
|
||||||
DEFAULT_CONTAINER_EXTENSION,
|
|
||||||
DEFAULT_CONTAINER_FORMAT,
|
|
||||||
DEFAULT_VIDEO_ENCODER_LABEL,
|
|
||||||
DEFAULT_cut_start,
|
|
||||||
DEFAULT_cut_length,
|
|
||||||
FFMPEG_COMMAND_TOKENS,
|
|
||||||
FFMPEG_NULL_OUTPUT_TOKENS,
|
|
||||||
SUPPORTED_INPUT_FILE_EXTENSIONS,
|
|
||||||
)
|
|
||||||
|
|
||||||
from ffx.filter.quality_filter import QualityFilter
|
|
||||||
from ffx.filter.preset_filter import PresetFilter
|
|
||||||
from ffx.filter.crop_filter import CropFilter
|
|
||||||
|
|
||||||
from ffx.model.pattern import Pattern
|
|
||||||
|
|
||||||
|
|
||||||
class FfxController():
|
|
||||||
|
|
||||||
COMMAND_TOKENS = list(FFMPEG_COMMAND_TOKENS)
|
|
||||||
NULL_TOKENS = list(FFMPEG_NULL_OUTPUT_TOKENS) # -f null /dev/null
|
|
||||||
|
|
||||||
TEMP_FILE_NAME = "ffmpeg2pass-0.log"
|
|
||||||
|
|
||||||
DEFAULT_VIDEO_ENCODER = DEFAULT_VIDEO_ENCODER_LABEL
|
|
||||||
|
|
||||||
DEFAULT_FILE_FORMAT = DEFAULT_CONTAINER_FORMAT
|
|
||||||
DEFAULT_FILE_EXTENSION = DEFAULT_CONTAINER_EXTENSION
|
|
||||||
|
|
||||||
INPUT_FILE_EXTENSIONS = list(SUPPORTED_INPUT_FILE_EXTENSIONS)
|
|
||||||
|
|
||||||
CHANNEL_MAP_5_1 = 'FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1'
|
|
||||||
|
|
||||||
# SIGNATURE_TAGS = {'RECODED_WITH': 'FFX'}
|
|
||||||
|
|
||||||
def __init__(self,
|
|
||||||
context : dict,
|
|
||||||
targetMediaDescriptor : MediaDescriptor,
|
|
||||||
sourceMediaDescriptor : MediaDescriptor = None):
|
|
||||||
|
|
||||||
self.__context = context
|
|
||||||
|
|
||||||
self.__targetMediaDescriptor = targetMediaDescriptor
|
|
||||||
self.__sourceMediaDescriptor = sourceMediaDescriptor
|
|
||||||
|
|
||||||
self.__mdcs = MediaDescriptorChangeSet(context,
|
|
||||||
targetMediaDescriptor,
|
|
||||||
sourceMediaDescriptor)
|
|
||||||
|
|
||||||
self.__logger: Logger = context['logger']
|
|
||||||
|
|
||||||
|
|
||||||
def executeCommandSequence(self, commandSequence):
|
|
||||||
out, err, rc = executeProcess(commandSequence, context=self.__context)
|
|
||||||
if rc:
|
|
||||||
raise click.ClickException(f"Command resulted in error: rc={rc} error={err}")
|
|
||||||
return out, err, rc
|
|
||||||
|
|
||||||
|
|
||||||
def generateAV1Tokens(self, quality, preset, subIndex : int = 0):
|
|
||||||
|
|
||||||
return [f"-c:v:{int(subIndex)}", 'libsvtav1',
|
|
||||||
'-svtav1-params', f"crf={quality}:preset={preset}:tune=0:enable-overlays=1:scd=1:scm=0",
|
|
||||||
'-pix_fmt', 'yuv420p10le']
|
|
||||||
|
|
||||||
|
|
||||||
# -c:v libx264 -preset slow -crf 17
|
|
||||||
def generateH264Tokens(self, quality, subIndex : int = 0):
|
|
||||||
|
|
||||||
return [f"-c:v:{int(subIndex)}", 'libx264',
|
|
||||||
"-preset", "slow",
|
|
||||||
'-crf', str(quality)]
|
|
||||||
|
|
||||||
|
|
||||||
# -c:v:0 libvpx-vp9 -row-mt 1 -crf 32 -pass 1 -speed 4 -frame-parallel 0 -g 9999 -aq-mode 0
|
|
||||||
def generateVP9Pass1Tokens(self, quality, subIndex : int = 0):
|
|
||||||
|
|
||||||
return [f"-c:v:{int(subIndex)}",
|
|
||||||
'libvpx-vp9',
|
|
||||||
'-row-mt', '1',
|
|
||||||
'-crf', str(quality),
|
|
||||||
'-pass', '1',
|
|
||||||
'-speed', '4',
|
|
||||||
'-frame-parallel', '0',
|
|
||||||
'-g', '9999',
|
|
||||||
'-aq-mode', '0']
|
|
||||||
|
|
||||||
# -c:v:0 libvpx-vp9 -row-mt 1 -crf 32 -pass 2 -frame-parallel 0 -g 9999 -aq-mode 0 -auto-alt-ref 1 -lag-in-frames 25
|
|
||||||
def generateVP9Pass2Tokens(self, quality, subIndex : int = 0):
|
|
||||||
|
|
||||||
return [f"-c:v:{int(subIndex)}",
|
|
||||||
'libvpx-vp9',
|
|
||||||
'-row-mt', '1',
|
|
||||||
'-crf', str(quality),
|
|
||||||
'-pass', '2',
|
|
||||||
'-frame-parallel', '0',
|
|
||||||
'-g', '9999',
|
|
||||||
'-aq-mode', '0',
|
|
||||||
'-auto-alt-ref', '1',
|
|
||||||
'-lag-in-frames', '25']
|
|
||||||
|
|
||||||
def generateVideoCopyTokens(self, subIndex):
|
|
||||||
return [f"-c:v:{int(subIndex)}",
|
|
||||||
'copy']
|
|
||||||
|
|
||||||
def generateAudioCopyTokens(self, subIndex):
|
|
||||||
return [f"-c:a:{int(subIndex)}", 'copy']
|
|
||||||
|
|
||||||
def generateSubtitleCopyTokens(self, subIndex):
|
|
||||||
return [f"-c:s:{int(subIndex)}", 'copy']
|
|
||||||
|
|
||||||
def generateAttachmentCopyTokens(self, subIndex):
|
|
||||||
return [f"-c:t:{int(subIndex)}", 'copy']
|
|
||||||
|
|
||||||
def generateCopyTokens(self):
|
|
||||||
copyTokens = []
|
|
||||||
|
|
||||||
for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
|
|
||||||
copyTokens += self.generateVideoCopyTokens(trackDescriptor.getSubIndex())
|
|
||||||
|
|
||||||
for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.AUDIO):
|
|
||||||
copyTokens += self.generateAudioCopyTokens(trackDescriptor.getSubIndex())
|
|
||||||
|
|
||||||
for trackDescriptor in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.SUBTITLE):
|
|
||||||
copyTokens += self.generateSubtitleCopyTokens(trackDescriptor.getSubIndex())
|
|
||||||
|
|
||||||
attachmentDescriptors = (
|
|
||||||
self.__sourceMediaDescriptor.getTrackDescriptors(trackType=TrackType.ATTACHMENT)
|
|
||||||
if self.__sourceMediaDescriptor is not None
|
|
||||||
else self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.ATTACHMENT)
|
|
||||||
)
|
|
||||||
for trackDescriptor in attachmentDescriptors:
|
|
||||||
copyTokens += self.generateAttachmentCopyTokens(trackDescriptor.getSubIndex())
|
|
||||||
|
|
||||||
return copyTokens
|
|
||||||
|
|
||||||
|
|
||||||
def generateCropTokens(self):
|
|
||||||
|
|
||||||
if 'cut_start' in self.__context.keys() and 'cut_length' in self.__context.keys():
|
|
||||||
cropStart = int(self.__context['cut_start'])
|
|
||||||
cropLength = int(self.__context['cut_length'])
|
|
||||||
else:
|
|
||||||
cropStart = DEFAULT_cut_start
|
|
||||||
cropLength = DEFAULT_cut_length
|
|
||||||
|
|
||||||
return ['-ss', str(cropStart), '-t', str(cropLength)]
|
|
||||||
|
|
||||||
|
|
||||||
def generateOutputTokens(self, filePathBase, format = '', ext = ''):
|
|
||||||
|
|
||||||
self.__logger.debug(f"FfxController.generateOutputTokens(): base='{filePathBase}' format='{format}' ext='{ext}'")
|
|
||||||
|
|
||||||
outputFilePath = f"{filePathBase}{('.'+str(ext)) if ext else ''}"
|
|
||||||
if format:
|
|
||||||
return ['-f', format, outputFilePath]
|
|
||||||
else:
|
|
||||||
return [outputFilePath]
|
|
||||||
|
|
||||||
|
|
||||||
def generateEncodingMetadataTags(self, videoEncoder: VideoEncoder, quality, preset) -> dict:
|
|
||||||
metadataTags = {}
|
|
||||||
|
|
||||||
if videoEncoder in (VideoEncoder.AV1, VideoEncoder.H264, VideoEncoder.VP9):
|
|
||||||
metadataTags["ENCODING_QUALITY"] = str(quality)
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.AV1:
|
|
||||||
metadataTags["ENCODING_PRESET"] = str(preset)
|
|
||||||
|
|
||||||
return metadataTags
|
|
||||||
|
|
||||||
|
|
||||||
def generateAudioEncodingTokens(self):
|
|
||||||
"""Generates ffmpeg options audio streams including channel remapping, codec and bitrate"""
|
|
||||||
|
|
||||||
audioTokens = []
|
|
||||||
|
|
||||||
# targetAudioTrackDescriptors = [td for td in self.__targetMediaDescriptor.getAllTrackDescriptors() if td.getType() == TrackType.AUDIO]
|
|
||||||
targetAudioTrackDescriptors = self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.AUDIO)
|
|
||||||
|
|
||||||
trackSubIndex = 0
|
|
||||||
for trackDescriptor in targetAudioTrackDescriptors:
|
|
||||||
|
|
||||||
trackAudioLayout = trackDescriptor.getAudioLayout()
|
|
||||||
|
|
||||||
if trackAudioLayout == AudioLayout.LAYOUT_6_1:
|
|
||||||
audioTokens += [f"-c:a:{trackSubIndex}",
|
|
||||||
'libopus',
|
|
||||||
f"-filter:a:{trackSubIndex}",
|
|
||||||
'channelmap=channel_layout=6.1',
|
|
||||||
f"-b:a:{trackSubIndex}",
|
|
||||||
self.__context['bitrates']['dts']]
|
|
||||||
|
|
||||||
if trackAudioLayout == AudioLayout.LAYOUT_5_1:
|
|
||||||
audioTokens += [f"-c:a:{trackSubIndex}",
|
|
||||||
'libopus',
|
|
||||||
f"-filter:a:{trackSubIndex}",
|
|
||||||
f"channelmap={FfxController.CHANNEL_MAP_5_1}",
|
|
||||||
f"-b:a:{trackSubIndex}",
|
|
||||||
self.__context['bitrates']['ac3']]
|
|
||||||
|
|
||||||
if trackAudioLayout == AudioLayout.LAYOUT_STEREO:
|
|
||||||
audioTokens += [f"-c:a:{trackSubIndex}",
|
|
||||||
'libopus',
|
|
||||||
f"-b:a:{trackSubIndex}",
|
|
||||||
self.__context['bitrates']['stereo']]
|
|
||||||
|
|
||||||
if trackAudioLayout == AudioLayout.LAYOUT_6CH:
|
|
||||||
audioTokens += [f"-c:a:{trackSubIndex}",
|
|
||||||
'libopus',
|
|
||||||
f"-filter:a:{trackSubIndex}",
|
|
||||||
f"channelmap={FfxController.CHANNEL_MAP_5_1}",
|
|
||||||
f"-b:a:{trackSubIndex}",
|
|
||||||
self.__context['bitrates']['ac3']]
|
|
||||||
|
|
||||||
# -ac 5 ?
|
|
||||||
if trackAudioLayout == AudioLayout.LAYOUT_5_0:
|
|
||||||
audioTokens += [f"-c:a:{trackSubIndex}",
|
|
||||||
'libopus',
|
|
||||||
f"-filter:a:{trackSubIndex}",
|
|
||||||
'channelmap=channel_layout=5.0',
|
|
||||||
f"-b:a:{trackSubIndex}",
|
|
||||||
self.__context['bitrates']['ac3']]
|
|
||||||
|
|
||||||
trackSubIndex += 1
|
|
||||||
return audioTokens
|
|
||||||
|
|
||||||
|
|
||||||
def runJob(self,
|
|
||||||
sourcePath,
|
|
||||||
targetPath,
|
|
||||||
targetFormat: str = '',
|
|
||||||
chainIteration: list = [],
|
|
||||||
cropArguments: dict = {},
|
|
||||||
currentPattern: Pattern = None):
|
|
||||||
# quality: int = DEFAULT_QUALITY,
|
|
||||||
# preset: int = DEFAULT_AV1_PRESET):
|
|
||||||
|
|
||||||
|
|
||||||
videoEncoder: VideoEncoder = self.__context.get('video_encoder', VideoEncoder.VP9)
|
|
||||||
|
|
||||||
|
|
||||||
qualityFilters = [fy for fy in chainIteration if fy['identifier'] == 'quality']
|
|
||||||
presetFilters = [fy for fy in chainIteration if fy['identifier'] == 'preset']
|
|
||||||
|
|
||||||
cropFilters = [fy for fy in chainIteration if fy['identifier'] == 'crop']
|
|
||||||
denoiseFilters = [fy for fy in chainIteration if fy['identifier'] == 'nlmeans']
|
|
||||||
deinterlaceFilters = [fy for fy in chainIteration if fy['identifier'] == 'bwdif']
|
|
||||||
|
|
||||||
|
|
||||||
if qualityFilters and (quality := qualityFilters[0]['parameters']['quality']):
|
|
||||||
self.__logger.info(f"Setting quality {quality} from command line parameter")
|
|
||||||
elif currentPattern is not None and (quality := currentPattern.quality):
|
|
||||||
self.__logger.info(f"Setting quality {quality} from pattern default")
|
|
||||||
else:
|
|
||||||
quality = (QualityFilter.DEFAULT_H264_QUALITY
|
|
||||||
if (videoEncoder == VideoEncoder.H264)
|
|
||||||
else QualityFilter.DEFAULT_VP9_QUALITY)
|
|
||||||
self.__logger.info(f"Setting quality {quality} from default")
|
|
||||||
|
|
||||||
|
|
||||||
preset = presetFilters[0]['parameters']['preset'] if presetFilters else PresetFilter.DEFAULT_PRESET
|
|
||||||
self.__context['encoding_metadata_tags'] = self.generateEncodingMetadataTags(
|
|
||||||
videoEncoder,
|
|
||||||
quality,
|
|
||||||
preset,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
filterParamTokens = []
|
|
||||||
|
|
||||||
if cropArguments:
|
|
||||||
|
|
||||||
cropParams = (f"crop="
|
|
||||||
+ f"{cropArguments[CropFilter.OUTPUT_WIDTH_KEY]}"
|
|
||||||
+ f":{cropArguments[CropFilter.OUTPUT_HEIGHT_KEY]}"
|
|
||||||
+ f":{cropArguments[CropFilter.OFFSET_X_KEY]}"
|
|
||||||
+ f":{cropArguments[CropFilter.OFFSET_Y_KEY]}")
|
|
||||||
|
|
||||||
filterParamTokens.append(cropParams)
|
|
||||||
|
|
||||||
filterParamTokens.extend(denoiseFilters[0]['tokens'] if denoiseFilters else [])
|
|
||||||
filterParamTokens.extend(deinterlaceFilters[0]['tokens'] if deinterlaceFilters else [])
|
|
||||||
|
|
||||||
deinterlaceFilters
|
|
||||||
|
|
||||||
filterTokens = ['-vf', ', '.join(filterParamTokens)] if filterParamTokens else []
|
|
||||||
|
|
||||||
|
|
||||||
commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.COPY:
|
|
||||||
|
|
||||||
commandSequence = (commandTokens
|
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
|
||||||
|
|
||||||
commandSequence += self.__mdcs.generateMetadataTokens()
|
|
||||||
commandSequence += self.generateCopyTokens()
|
|
||||||
|
|
||||||
if self.__context['perform_cut']:
|
|
||||||
commandSequence += self.generateCropTokens()
|
|
||||||
|
|
||||||
commandSequence += self.generateOutputTokens(targetPath,
|
|
||||||
targetFormat)
|
|
||||||
|
|
||||||
self.__logger.debug("FfxController.runJob(): Running command sequence")
|
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
|
||||||
self.executeCommandSequence(commandSequence)
|
|
||||||
return
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.AV1:
|
|
||||||
|
|
||||||
commandSequence = (commandTokens
|
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
|
||||||
|
|
||||||
# Optional tokens
|
|
||||||
commandSequence += self.__mdcs.generateMetadataTokens()
|
|
||||||
commandSequence += filterTokens
|
|
||||||
|
|
||||||
for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
|
|
||||||
#HINT: Attached thumbnails are not supported by .webm container format
|
|
||||||
if td.getCodec != TrackCodec.PNG:
|
|
||||||
commandSequence += self.generateAV1Tokens(int(quality), int(preset))
|
|
||||||
|
|
||||||
commandSequence += self.generateAudioEncodingTokens()
|
|
||||||
|
|
||||||
if self.__context['perform_cut']:
|
|
||||||
commandSequence += self.generateCropTokens()
|
|
||||||
|
|
||||||
commandSequence += self.generateOutputTokens(targetPath,
|
|
||||||
targetFormat)
|
|
||||||
|
|
||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
|
||||||
self.executeCommandSequence(commandSequence)
|
|
||||||
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.H264:
|
|
||||||
|
|
||||||
commandSequence = (commandTokens
|
|
||||||
+ self.__targetMediaDescriptor.getImportFileTokens()
|
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor = self.__sourceMediaDescriptor)
|
|
||||||
+ self.__mdcs.generateDispositionTokens())
|
|
||||||
|
|
||||||
# Optional tokens
|
|
||||||
commandSequence += self.__mdcs.generateMetadataTokens()
|
|
||||||
commandSequence += filterTokens
|
|
||||||
|
|
||||||
for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
|
|
||||||
#HINT: Attached thumbnails are not supported by .webm container format
|
|
||||||
if td.getCodec != TrackCodec.PNG:
|
|
||||||
commandSequence += self.generateH264Tokens(int(quality))
|
|
||||||
|
|
||||||
commandSequence += self.generateAudioEncodingTokens()
|
|
||||||
|
|
||||||
if self.__context['perform_cut']:
|
|
||||||
commandSequence += self.generateCropTokens()
|
|
||||||
|
|
||||||
commandSequence += self.generateOutputTokens(targetPath,
|
|
||||||
targetFormat)
|
|
||||||
|
|
||||||
self.__logger.debug(f"FfxController.runJob(): Running command sequence")
|
|
||||||
|
|
||||||
if not self.__context['dry_run']:
|
|
||||||
self.executeCommandSequence(commandSequence)
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
if videoEncoder == VideoEncoder.VP9:
|
|
||||||
|
|
||||||
commandSequence1 = (commandTokens
|
|
||||||
+ self.__targetMediaDescriptor.getInputMappingTokens(only_video=True))
|
|
||||||
|
|
||||||
# Optional tokens
|
|
||||||
#NOTE: Filters and so needs to run on the first pass as well, as here
|
|
||||||
# the required bitrate for the second run is determined and recorded
|
|
||||||
# TODO: Results seems to be slightly better with first pass omitted,
|
|
||||||
        # Confirm or find better filter settings for 2-pass
        # commandSequence1 += self.__context['denoiser'].generatefilterTokens()

        for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
            # HINT: Attached thumbnails are not supported by the .webm container format
            if td.getCodec() != TrackCodec.PNG:
                commandSequence1 += self.generateVP9Pass1Tokens(int(quality))

        if self.__context['perform_cut']:
            commandSequence1 += self.generateCropTokens()

        commandSequence1 += FfxController.NULL_TOKENS

        if os.path.exists(FfxController.TEMP_FILE_NAME):
            os.remove(FfxController.TEMP_FILE_NAME)

        self.__logger.debug("FfxController.runJob(): Running command sequence 1")

        if not self.__context['dry_run']:
            self.executeCommandSequence(commandSequence1)

        commandSequence2 = (commandTokens
                            + self.__targetMediaDescriptor.getImportFileTokens()
                            + self.__targetMediaDescriptor.getInputMappingTokens(sourceMediaDescriptor=self.__sourceMediaDescriptor)
                            + self.__mdcs.generateDispositionTokens())

        # Optional tokens
        commandSequence2 += self.__mdcs.generateMetadataTokens()
        commandSequence2 += filterTokens

        for td in self.__targetMediaDescriptor.getTrackDescriptors(trackType=TrackType.VIDEO):
            # HINT: Attached thumbnails are not supported by the .webm container format
            if td.getCodec() != TrackCodec.PNG:
                commandSequence2 += self.generateVP9Pass2Tokens(int(quality))

        commandSequence2 += self.generateAudioEncodingTokens()

        if self.__context['perform_cut']:
            commandSequence2 += self.generateCropTokens()

        commandSequence2 += self.generateOutputTokens(targetPath,
                                                      targetFormat)

        self.__logger.debug("FfxController.runJob(): Running command sequence 2")

        if not self.__context['dry_run']:
            self.executeCommandSequence(commandSequence2)


    def createEmptyFile(self,
                        path: str = 'empty.mkv',
                        sizeX: int = 1280,
                        sizeY: int = 720,
                        rate: int = 25,
                        length: int = 10):

        # Copy the class-level token list so that += does not mutate it in place
        commandTokens = list(FfxController.COMMAND_TOKENS)

        commandTokens += ['-f',
                          'lavfi',
                          '-i',
                          f"color=size={sizeX}x{sizeY}:rate={rate}:color=black",
                          '-f',
                          'lavfi',
                          '-i',
                          'anullsrc=channel_layout=stereo:sample_rate=44100',
                          '-t',
                          str(length),
                          path]

        self.executeCommandSequence(commandTokens)
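The token list assembled by createEmptyFile maps onto a single ffmpeg invocation with two lavfi inputs: a black video source and silent stereo audio. A minimal standalone sketch of that argument list; the leading "ffmpeg" token is an assumption, since FfxController.COMMAND_TOKENS is not shown in this excerpt:

```python
def build_empty_file_tokens(path="empty.mkv", size_x=1280, size_y=720, rate=25, length=10):
    """Build an ffmpeg argument list for a black video with silent stereo audio."""
    return [
        "ffmpeg",  # assumed stand-in for FfxController.COMMAND_TOKENS
        "-f", "lavfi", "-i", f"color=size={size_x}x{size_y}:rate={rate}:color=black",
        "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
        "-t", str(length),
        path,
    ]

tokens = build_empty_file_tokens()
print(tokens[4])  # color=size=1280x720:rate=25:color=black
```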
@@ -1,292 +0,0 @@
import json
import os
import re

from .constants import (
    DEFAULT_CROPDETECT_DURATION_SECONDS,
    DEFAULT_CROPDETECT_SEEK_SECONDS,
    FFMPEG_COMMAND_TOKENS,
    FFMPEG_NULL_OUTPUT_TOKENS,
)
from .media_descriptor import MediaDescriptor
from .pattern_controller import PatternController

from ffx.filter.crop_filter import CropFilter

from .process import executeProcess

from ffx.model.pattern import Pattern


class FileProperties():

    _cropdetect_cache: dict[tuple[str, int, int, int, int], dict[str, str]] = {}

    FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']
    FFPROBE_COMMAND_TOKENS = ["ffprobe", "-hide_banner", "-show_format", "-show_streams", "-of", "json"]

    SE_INDICATOR_PATTERN = '([sS][0-9]+[eE][0-9]+)'
    SEASON_EPISODE_INDICATOR_MATCH = '[sS]([0-9]+)[eE]([0-9]+)'
    EPISODE_INDICATOR_MATCH = '[eE]([0-9]+)'

    CROPDETECT_PATTERN = 'crop=[0-9]+:[0-9]+:[0-9]+:[0-9]+$'

    DEFAULT_INDEX_DIGITS = 3

    @classmethod
    def extractSeasonEpisodeValues(cls, sourceText: str) -> tuple[int | None, int] | None:
        seasonEpisodeMatch = re.search(cls.SEASON_EPISODE_INDICATOR_MATCH, str(sourceText))
        if seasonEpisodeMatch is not None:
            return int(seasonEpisodeMatch.group(1)), int(seasonEpisodeMatch.group(2))

        episodeMatch = re.search(cls.EPISODE_INDICATOR_MATCH, str(sourceText))
        if episodeMatch is not None:
            return None, int(episodeMatch.group(1))

        return None

    def __init__(self, context, sourcePath):

        self.context = context

        self.__logger = context['logger']

        # Separate base directory, basename and extension for the current source file
        self.__sourcePath = sourcePath

        self.__sourceDirectory = os.path.dirname(self.__sourcePath)
        self.__sourceFilename = os.path.basename(self.__sourcePath)

        sourcePathTokens = self.__sourceFilename.split('.')

        if sourcePathTokens[-1] in FileProperties.FILE_EXTENSIONS:
            self.__sourceFileBasename = '.'.join(sourcePathTokens[:-1])
            self.__sourceFilenameExtension = sourcePathTokens[-1]
        else:
            self.__sourceFileBasename = self.__sourceFilename
            self.__sourceFilenameExtension = ''

        self.__pc = PatternController(context)
        self.__usePattern = bool(self.context.get('use_pattern', True))

        # Check whether the database contains a matching pattern
        matchResult = self.__pc.matchFilename(self.__sourceFilename) if self.__usePattern else {}

        self.__logger.debug(f"FileProperties.__init__(): Match result: {matchResult}")

        self.__pattern: Pattern | None = matchResult['pattern'] if matchResult else None

        if matchResult:
            databaseMatchedGroups = matchResult['match'].groups()
            self.__logger.debug(f"FileProperties.__init__(): Matched groups: {databaseMatchedGroups}")

            indicatorSource = databaseMatchedGroups[0]
        else:
            self.__logger.debug(f"FileProperties.__init__(): Checking file name for indicator {self.__sourceFilename}")
            indicatorSource = self.__sourceFilename

        seasonEpisodeValues = self.extractSeasonEpisodeValues(indicatorSource)
        if seasonEpisodeValues is None:
            self.__season = -1
            self.__episode = -1
        else:
            sourceSeason, sourceEpisode = seasonEpisodeValues
            self.__season = -1 if sourceSeason is None else int(sourceSeason)
            self.__episode = int(sourceEpisode)

        self.__ffprobeData = None

    def _getCropdetectWindow(self):
        cropdetectContext = self.context.get('cropdetect', {})

        seekSeconds = int(cropdetectContext.get('seek_seconds', DEFAULT_CROPDETECT_SEEK_SECONDS))
        durationSeconds = int(cropdetectContext.get('duration_seconds', DEFAULT_CROPDETECT_DURATION_SECONDS))

        if seekSeconds < 0:
            raise ValueError("Crop detection seek seconds must be zero or greater.")
        if durationSeconds <= 0:
            raise ValueError("Crop detection duration seconds must be greater than zero.")

        return seekSeconds, durationSeconds

    def _getCropdetectCacheKey(self):
        sourceStat = os.stat(self.__sourcePath)
        seekSeconds, durationSeconds = self._getCropdetectWindow()

        return (
            os.path.abspath(self.__sourcePath),
            sourceStat.st_mtime_ns,
            sourceStat.st_size,
            seekSeconds,
            durationSeconds,
        )

    @classmethod
    def _clear_cropdetect_cache(cls):
        cls._cropdetect_cache.clear()

    def _getFfprobeData(self):
        if self.__ffprobeData is not None:
            return self.__ffprobeData

        ffprobeOutput, ffprobeError, returnCode = executeProcess(
            FileProperties.FFPROBE_COMMAND_TOKENS + [self.__sourcePath]
        )

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        self.__ffprobeData = json.loads(ffprobeOutput)
        return self.__ffprobeData

    def getFormatData(self):
        """Returns the ffprobe "format" section, for example:

        "format": {
            "filename": "Downloads/nagatoro_s02/nagatoro_s01e02.mkv",
            "nb_streams": 18,
            "nb_programs": 0,
            "nb_stream_groups": 0,
            "format_name": "matroska,webm",
            "format_long_name": "Matroska / WebM",
            "start_time": "0.000000",
            "duration": "1420.063000",
            "size": "1489169824",
            "bit_rate": "8389316",
            "probe_score": 100,
            "tags": {
                "PUBLISHER": "Crunchyroll",
                "ENCODER": "Lavf58.29.100"
            }
        }
        """
        return self._getFfprobeData()['format']

    def getStreamData(self):
        """Returns ffprobe stream data as a list with elements like the following:

        {
            "index": 4,
            "codec_name": "hdmv_pgs_subtitle",
            "codec_long_name": "HDMV Presentation Graphic Stream subtitles",
            "codec_type": "subtitle",
            "codec_tag_string": "[0][0][0][0]",
            "codec_tag": "0x0000",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/1000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1421035,
            "duration": "1421.035000",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0,
                "non_diegetic": 0,
                "captions": 0,
                "descriptions": 0,
                "metadata": 0,
                "dependent": 0,
                "still_image": 0
            },
            "tags": {
                "language": "ger",
                "title": "German Full"
            }
        }
        """
        return self._getFfprobeData()['streams']

    def findCropArguments(self):
        """Run ffmpeg cropdetect over a sample window and return crop filter arguments."""

        cacheKey = self._getCropdetectCacheKey()
        cachedCropArguments = FileProperties._cropdetect_cache.get(cacheKey)
        if cachedCropArguments is not None:
            self.__logger.debug(
                "FileProperties.findCropArguments(): Reusing cached cropdetect result for %s",
                self.__sourcePath,
            )
            return dict(cachedCropArguments)

        seekSeconds, durationSeconds = self._getCropdetectWindow()

        cropdetectCommand = (
            list(FFMPEG_COMMAND_TOKENS)
            + ["-ss", str(seekSeconds), "-i", self.__sourcePath, "-t", str(durationSeconds), "-vf", "cropdetect"]
            + list(FFMPEG_NULL_OUTPUT_TOKENS)
        )
        _ffmpegOutput, ffmpegError, returnCode = executeProcess(cropdetectCommand, context=self.context)

        errorLines = ffmpegError.split('\n')

        crops = {}
        for el in errorLines:

            cropdetect_match = re.search(FileProperties.CROPDETECT_PATTERN, el)

            if cropdetect_match is not None:
                cropParam = str(cropdetect_match.group(0))

                crops[cropParam] = crops.get(cropParam, 0) + 1

        if crops:
            # Most frequent crop suggestion wins; ties are broken by the larger crop string
            cropString = max(crops.items(), key=lambda item: (item[1], item[0]))[0]

            cropTokens = cropString.split('=')
            cropValueTokens = cropTokens[1]
            cropValues = cropValueTokens.split(':')

            cropArguments = {
                CropFilter.OUTPUT_WIDTH_KEY: cropValues[0],
                CropFilter.OUTPUT_HEIGHT_KEY: cropValues[1],
                CropFilter.OFFSET_X_KEY: cropValues[2],
                CropFilter.OFFSET_Y_KEY: cropValues[3]
            }
            FileProperties._cropdetect_cache[cacheKey] = dict(cropArguments)
            return cropArguments

        if returnCode != 0:
            raise Exception(f"ffmpeg cropdetect returned with error {returnCode}")

        FileProperties._cropdetect_cache[cacheKey] = {}
        return {}

    def getMediaDescriptor(self):
        return MediaDescriptor.fromFfprobe(self.context, self.getFormatData(), self.getStreamData())

    def getShowId(self) -> int:
        """Result is -1 if the filename did not match anything in the database"""
        return self.__pattern.getShowId() if self.__pattern is not None else -1

    def getPattern(self) -> Pattern:
        """Result is None if the filename did not match anything in the database"""
        return self.__pattern

    def getSeason(self) -> int:
        return int(self.__season)

    def getEpisode(self) -> int:
        return int(self.__episode)

    def getFilename(self):
        return self.__sourceFilename

    def getFileBasename(self):
        return self.__sourceFileBasename
@@ -1,51 +0,0 @@
from .filter import Filter


class CropFilter(Filter):

    IDENTIFIER = 'crop'

    OUTPUT_WIDTH_KEY = 'output_width'
    OUTPUT_HEIGHT_KEY = 'output_height'
    OFFSET_X_KEY = 'x_offset'
    OFFSET_Y_KEY = 'y_offset'

    def __init__(self, **kwargs):

        self.__outputWidth = int(kwargs.get(CropFilter.OUTPUT_WIDTH_KEY, 0))
        self.__outputHeight = int(kwargs.get(CropFilter.OUTPUT_HEIGHT_KEY, 0))
        self.__offsetX = int(kwargs.get(CropFilter.OFFSET_X_KEY, 0))
        self.__offsetY = int(kwargs.get(CropFilter.OFFSET_Y_KEY, 0))

        super().__init__(self)

    def setArguments(self, **kwargs):
        self.__outputWidth = int(kwargs.get(CropFilter.OUTPUT_WIDTH_KEY))
        self.__outputHeight = int(kwargs.get(CropFilter.OUTPUT_HEIGHT_KEY))
        self.__offsetX = int(kwargs.get(CropFilter.OFFSET_X_KEY))
        self.__offsetY = int(kwargs.get(CropFilter.OFFSET_Y_KEY))

    def getPayload(self):

        payload = {'identifier': CropFilter.IDENTIFIER,
                   'parameters': {
                       CropFilter.OUTPUT_WIDTH_KEY: self.__outputWidth,
                       CropFilter.OUTPUT_HEIGHT_KEY: self.__outputHeight,
                       CropFilter.OFFSET_X_KEY: self.__offsetX,
                       CropFilter.OFFSET_Y_KEY: self.__offsetY
                   },
                   'suffices': [],
                   'variant': f"C{self.__outputWidth}-{self.__outputHeight}-{self.__offsetX}-{self.__offsetY}",
                   'tokens': ['crop='
                              + f"{self.__outputWidth}"
                              + f":{self.__outputHeight}"
                              + f":{self.__offsetX}"
                              + f":{self.__offsetY}"]}

        return payload

    def getYield(self):
        yield self.getPayload()
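For reference, the two strings CropFilter.getPayload() derives from its four parameters can be sketched as a small helper (crop_strings is a hypothetical name, not part of the class):

```python
def crop_strings(output_width: int, output_height: int, x_offset: int, y_offset: int) -> dict:
    """Format the variant label and ffmpeg crop token as CropFilter.getPayload() does."""
    return {
        "variant": f"C{output_width}-{output_height}-{x_offset}-{y_offset}",
        "tokens": [f"crop={output_width}:{output_height}:{x_offset}:{y_offset}"],
    }

print(crop_strings(1920, 800, 0, 140)["tokens"])  # ['crop=1920:800:0:140']
```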
@@ -1,140 +0,0 @@
from .filter import Filter


class DeinterlaceFilter(Filter):

    IDENTIFIER = 'bwdif'

    def __init__(self, **kwargs):
        super().__init__(self)

    def getPayload(self):

        payload = {'identifier': DeinterlaceFilter.IDENTIFIER,
                   'parameters': {},
                   'suffices': [],
                   'variant': 'DEINT',
                   'tokens': ['bwdif=mode=1']}

        return payload

    def getYield(self):
        yield self.getPayload()
@@ -1,17 +0,0 @@
import itertools


class Filter():

    # Class-level chain: every Filter subclass instance registers itself here,
    # so the chain is shared across all filter instances.
    filterChain: list = []

    def __init__(self, filter):

        self.filterChain.append(filter)

    def getFilterChain(self):
        return self.filterChain

    def getChainYield(self):
        for fy in itertools.product(*[f.getYield() for f in self.filterChain]):
            yield fy
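getChainYield() takes the Cartesian product of every registered filter's yield, so one payload combination is produced per variant of each filter. A self-contained sketch of that behavior with two hypothetical filter yields:

```python
import itertools

def chain_yield(filter_yields):
    """Cartesian product over per-filter payload yields, as Filter.getChainYield() does."""
    for combo in itertools.product(*[list(y) for y in filter_yields]):
        yield combo

# Hypothetical payload lists, standing in for two filters' getYield() results
quality = [{"identifier": "quality", "variant": f"Q{q}"} for q in (30, 35)]
preset = [{"identifier": "preset", "variant": f"P{p}"} for p in (4, 6)]

combos = list(chain_yield([quality, preset]))
print(len(combos))  # 4
```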
@@ -1,162 +0,0 @@
import itertools

from .filter import Filter


class NlmeansFilter(Filter):

    IDENTIFIER = 'nlmeans'

    DEFAULT_STRENGTH: float = 2.8
    DEFAULT_PATCH_SIZE: int = 13
    DEFAULT_CHROMA_PATCH_SIZE: int = 9
    DEFAULT_RESEARCH_WINDOW: int = 23
    DEFAULT_CHROMA_RESEARCH_WINDOW: int = 17

    STRENGTH_KEY = 'strength'
    PATCH_SIZE_KEY = 'patch_size'
    CHROMA_PATCH_SIZE_KEY = 'chroma_patch_size'
    RESEARCH_WINDOW_KEY = 'research_window'
    CHROMA_RESEARCH_WINDOW_KEY = 'chroma_research_window'

    def __init__(self, **kwargs):

        self.__useHardware = kwargs.get('use_hardware', False)

        self.__strengthList = []
        strength = kwargs.get(NlmeansFilter.STRENGTH_KEY, '')
        if strength:
            strengthTokens = strength.split(',')
            for st in strengthTokens:
                try:
                    strengthValue = float(st)
                except ValueError:
                    raise ValueError('NlmeansFilter: Strength value has to be of type float')
                if strengthValue < 1.0 or strengthValue > 30.0:
                    raise ValueError('NlmeansFilter: Strength value has to be between 1.0 and 30.0')
                self.__strengthList.append(strengthValue)
        else:
            self.__strengthList = [NlmeansFilter.DEFAULT_STRENGTH]

        self.__patchSizeList = []
        patchSize = kwargs.get(NlmeansFilter.PATCH_SIZE_KEY, '')
        if patchSize:
            patchSizeTokens = patchSize.split(',')
            for pst in patchSizeTokens:
                try:
                    patchSizeValue = int(pst)
                except ValueError:
                    raise ValueError('NlmeansFilter: Patch size value has to be of type int')
                if patchSizeValue < 0 or patchSizeValue > 99:
                    raise ValueError('NlmeansFilter: Patch size value has to be between 0 and 99')
                if patchSizeValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Patch size value has to be an odd number')
                self.__patchSizeList.append(patchSizeValue)
        else:
            self.__patchSizeList = [NlmeansFilter.DEFAULT_PATCH_SIZE]

        self.__chromaPatchSizeList = []
        chromaPatchSize = kwargs.get(NlmeansFilter.CHROMA_PATCH_SIZE_KEY, '')
        if chromaPatchSize:
            chromaPatchSizeTokens = chromaPatchSize.split(',')
            for cpst in chromaPatchSizeTokens:
                try:
                    chromaPatchSizeValue = int(cpst)
                except ValueError:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be of type int')
                if chromaPatchSizeValue < 0 or chromaPatchSizeValue > 99:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be between 0 and 99')
                if chromaPatchSizeValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be an odd number')
                self.__chromaPatchSizeList.append(chromaPatchSizeValue)
        else:
            self.__chromaPatchSizeList = [NlmeansFilter.DEFAULT_CHROMA_PATCH_SIZE]

        self.__researchWindowList = []
        researchWindow = kwargs.get(NlmeansFilter.RESEARCH_WINDOW_KEY, '')
        if researchWindow:
            researchWindowTokens = researchWindow.split(',')
            for rwt in researchWindowTokens:
                try:
                    researchWindowValue = int(rwt)
                except ValueError:
                    raise ValueError('NlmeansFilter: Research window value has to be of type int')
                if researchWindowValue < 0 or researchWindowValue > 99:
                    raise ValueError('NlmeansFilter: Research window value has to be between 0 and 99')
                if researchWindowValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Research window value has to be an odd number')
                self.__researchWindowList.append(researchWindowValue)
        else:
            self.__researchWindowList = [NlmeansFilter.DEFAULT_RESEARCH_WINDOW]

        self.__chromaResearchWindowList = []
        chromaResearchWindow = kwargs.get(NlmeansFilter.CHROMA_RESEARCH_WINDOW_KEY, '')
        if chromaResearchWindow:
            chromaResearchWindowTokens = chromaResearchWindow.split(',')
            for crwt in chromaResearchWindowTokens:
                try:
                    chromaResearchWindowValue = int(crwt)
                except ValueError:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be of type int')
                if chromaResearchWindowValue < 0 or chromaResearchWindowValue > 99:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be between 0 and 99')
                if chromaResearchWindowValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be an odd number')
                self.__chromaResearchWindowList.append(chromaResearchWindowValue)
        else:
            self.__chromaResearchWindowList = [NlmeansFilter.DEFAULT_CHROMA_RESEARCH_WINDOW]

        super().__init__(self)

    def getPayload(self, iteration):

        strength, patchSize, chromaPatchSize, researchWindow, chromaResearchWindow = iteration

        suffices = []

        if len(self.__strengthList) > 1:
            suffices += [f"ds{strength}"]
        if len(self.__patchSizeList) > 1:
            suffices += [f"dp{patchSize}"]
        if len(self.__chromaPatchSizeList) > 1:
            suffices += [f"dpc{chromaPatchSize}"]
        if len(self.__researchWindowList) > 1:
            suffices += [f"dr{researchWindow}"]
        if len(self.__chromaResearchWindowList) > 1:
            suffices += [f"drc{chromaResearchWindow}"]

        filterName = 'nlmeans_opencl' if self.__useHardware else 'nlmeans'

        payload = {'identifier': NlmeansFilter.IDENTIFIER,
                   'parameters': {
                       'strength': strength,
                       'patch_size': patchSize,
                       'chroma_patch_size': chromaPatchSize,
                       'research_window': researchWindow,
                       'chroma_research_window': chromaResearchWindow
                   },
                   'suffices': suffices,
                   'variant': f"DS{strength}-DP{patchSize}-DPC{chromaPatchSize}"
                              + f"-DR{researchWindow}-DRC{chromaResearchWindow}",
                   'tokens': [f"{filterName}=s={strength}"
                              + f":p={patchSize}"
                              + f":pc={chromaPatchSize}"
                              + f":r={researchWindow}"
                              + f":rc={chromaResearchWindow}"]}

        return payload

    def getYield(self):
        for it in itertools.product(self.__strengthList,
                                    self.__patchSizeList,
                                    self.__chromaPatchSizeList,
                                    self.__researchWindowList,
                                    self.__chromaResearchWindowList):
            yield self.getPayload(it)
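The same parse-and-validate pattern recurs for every comma-separated nlmeans parameter: split on commas, convert, range-check, require odd values, and fall back to a default when the option is empty. A generic sketch of that pattern (parse_odd_int_list is a hypothetical helper, not part of NlmeansFilter):

```python
def parse_odd_int_list(raw: str, default: int, low: int = 0, high: int = 99) -> list[int]:
    """Parse 'a,b,c' into validated odd ints, falling back to [default] when empty."""
    if not raw:
        return [default]
    values = []
    for token in raw.split(","):
        try:
            value = int(token)
        except ValueError:
            raise ValueError("value has to be of type int")
        if value < low or value > high:
            raise ValueError(f"value has to be between {low} and {high}")
        if value % 2 == 0:
            raise ValueError("value has to be an odd number")
        values.append(value)
    return values

print(parse_odd_int_list("7,13", default=13))  # [7, 13]
print(parse_odd_int_list("", default=13))      # [13]
```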
@@ -1,54 +0,0 @@
from .filter import Filter


class PresetFilter(Filter):

    IDENTIFIER = 'preset'

    DEFAULT_PRESET = 5

    PRESET_KEY = 'preset'

    def __init__(self, **kwargs):

        self.__presetsList = []
        presets = str(kwargs.get(PresetFilter.PRESET_KEY, ''))
        if presets:
            presetTokens = presets.split(',')
            for p in presetTokens:
                try:
                    presetValue = int(p)
                except ValueError:
                    raise ValueError('PresetFilter: Preset value has to be of type int')
                if presetValue < 0 or presetValue > 13:
                    raise ValueError('PresetFilter: Preset value has to be between 0 and 13')
                self.__presetsList.append(presetValue)
        else:
            self.__presetsList = [PresetFilter.DEFAULT_PRESET]

        super().__init__(self)

    def getPayload(self, preset):

        suffices = []

        if len(self.__presetsList) > 1:
            suffices += [f"p{preset}"]

        payload = {'identifier': PresetFilter.IDENTIFIER,
                   'parameters': {
                       'preset': preset
                   },
                   'suffices': suffices,
                   'variant': f"P{preset}",
                   'tokens': []}

        return payload

    def getYield(self):
        for preset in self.__presetsList:
            yield self.getPayload(preset)
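PresetFilter itself emits no ffmpeg tokens ('tokens': []), so a downstream consumer presumably maps the 'preset' parameter onto an encoder flag. A purely hypothetical sketch of such a mapping; the -preset flag name is an assumption about the downstream encoder, not confirmed by this code:

```python
def preset_tokens(payload: dict) -> list[str]:
    """Hypothetical consumer: translate a PresetFilter payload into an encoder flag."""
    preset = payload["parameters"]["preset"]
    # '-preset' is an assumed flag name for the downstream encoder step
    return ["-preset", str(preset)]

payload = {"identifier": "preset", "parameters": {"preset": 5},
           "suffices": [], "variant": "P5", "tokens": []}
print(preset_tokens(payload))  # ['-preset', '5']
```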
@@ -1,62 +0,0 @@
import click

from .filter import Filter

from ffx.video_encoder import VideoEncoder


class QualityFilter(Filter):

    IDENTIFIER = 'quality'

    DEFAULT_VP9_QUALITY = 32
    DEFAULT_H264_QUALITY = 17

    QUALITY_KEY = 'quality'

    def __init__(self, **kwargs):

        context = click.get_current_context().obj

        self.__qualitiesList = []
        qualities = kwargs.get(QualityFilter.QUALITY_KEY, '')
        if qualities:
            qualityTokens = qualities.split(',')
            for q in qualityTokens:
                try:
                    qualityValue = int(q)
                except ValueError:
                    raise ValueError('QualityFilter: Quality value has to be of type int')
                if qualityValue < 0 or qualityValue > 63:
                    raise ValueError('QualityFilter: Quality value has to be between 0 and 63')
                self.__qualitiesList.append(qualityValue)
        else:
            # No explicit quality given: defer to the encoder's default
            self.__qualitiesList = [None]

        super().__init__(self)

    def getPayload(self, quality):

        suffices = []

        if len(self.__qualitiesList) > 1:
            suffices += [f"q{quality}"]

        payload = {'identifier': QualityFilter.IDENTIFIER,
                   'parameters': {
                       'quality': quality
                   },
                   'suffices': suffices,
                   'variant': f"Q{quality}",
                   'tokens': []}

        return payload

    def getYield(self):
        for q in self.__qualitiesList:
            yield self.getPayload(q)
@@ -1,6 +0,0 @@
from .filter import Filter


class ScaleFilter(Filter):

    def __init__(self):
        super().__init__(self)
@@ -1,13 +0,0 @@
from textual.app import ComposeResult
from textual.screen import Screen
from textual.widgets import Footer, Placeholder


class HelpScreen(Screen):

    def __init__(self):
        super().__init__()
        context = self.app.getContext()

    def compose(self) -> ComposeResult:
        yield Placeholder("Help Screen")
        yield Footer()
@@ -1,259 +0,0 @@
import re

from jinja2 import Environment, Undefined

from .constants import DEFAULT_OUTPUT_FILENAME_TEMPLATE
from .configuration_controller import ConfigurationController
from .logging_utils import get_ffx_logger
from .show_descriptor import ShowDescriptor


class EmptyStringUndefined(Undefined):
    """Jinja2 undefined that renders as an empty string instead of raising."""

    def __str__(self):
        return ''


DIFF_ADDED_KEY = 'added'
DIFF_REMOVED_KEY = 'removed'
DIFF_CHANGED_KEY = 'changed'
DIFF_UNCHANGED_KEY = 'unchanged'

FILENAME_FILTER_TRANSLATION = str.maketrans(
    {
        "/": "-",
        ":": ";",
        "*": "",
        "'": "",
        "?": "#",
        "♥": "",
        "’": "",
    }
)
TMDB_FILLER_MARKERS = (" (*)", "(*)")
TMDB_EPISODE_RANGE_SUFFIX_REGEX = re.compile(r"\(([0-9]+)[-/]([0-9]+)\)$")
TMDB_EPISODE_PART_SUFFIX_REGEX = re.compile(r"\(([0-9]+)\)$")
RICH_COLOR_REGEX = re.compile(r"\[[a-z_]+\](.+)\[/[a-z_]+\]")
def dictDiff(a: dict, b: dict, ignoreKeys: list = None, removeKeys: list = None):
    """
    ignoreKeys: Ignored keys are filtered out before the diff is calculated
    removeKeys: Keys that are always treated as removed, overriding the diff calculation
    """

    # Avoid shared mutable default arguments
    ignoreKeys = ignoreKeys if ignoreKeys is not None else []
    removeKeys = removeKeys if removeKeys is not None else []

    a_filtered = {k: v for k, v in a.items() if k not in ignoreKeys}
    b_filtered = {k: v for k, v in b.items() if k not in ignoreKeys and k not in removeKeys}

    a_only = {k: v for k, v in a_filtered.items() if k not in b_filtered}
    b_only = {k: v for k, v in b_filtered.items() if k not in a_filtered}

    a_b = set(a_filtered.keys()) & set(b_filtered.keys())

    changed = {k: b_filtered[k] for k in a_b if a_filtered[k] != b_filtered[k]}
    unchanged = {k: b_filtered[k] for k in a_b if a_filtered[k] == b_filtered[k]}

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = unchanged
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only
    if changed:
        diffResult[DIFF_CHANGED_KEY] = changed

    return diffResult

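The diff shape dictDiff produces can be sketched standalone. This is an illustrative reimplementation without the ignore/remove parameters; the string keys match the DIFF_*_KEY constants above:

```python
def dict_diff(a, b):
    # Illustrative reimplementation of dictDiff above (no ignore/remove keys).
    a_only = {k: v for k, v in a.items() if k not in b}
    b_only = {k: v for k, v in b.items() if k not in a}
    shared = a.keys() & b.keys()
    changed = {k: b[k] for k in shared if a[k] != b[k]}
    unchanged = {k: b[k] for k in shared if a[k] == b[k]}

    result = {}
    if a_only:
        result['removed'] = a_only
        result['unchanged'] = unchanged  # only reported alongside removals, as above
    if b_only:
        result['added'] = b_only
    if changed:
        result['changed'] = changed
    return result

# Hypothetical media-tag dicts, purely for demonstration
diff = dict_diff({'fps': 25, 'codec': 'vp9'}, {'fps': 30, 'container': 'webm'})
```

Note that equal dicts yield an empty result, which is what the dictCache lookup below relies on.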
def dictKeysDiff(a: dict, b: dict):

    a_keys = set(a.keys())
    b_keys = set(b.keys())

    a_only = a_keys - b_keys
    b_only = b_keys - a_keys
    a_b = a_keys & b_keys

    changed = {k for k in a_b if a[k] != b[k]}

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = b_keys
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only
    if changed:
        diffResult[DIFF_CHANGED_KEY] = changed

    return diffResult

def dictCache(element: dict, cache: list = None):
    # Avoid a shared mutable default argument; callers should pass the cache explicitly
    if cache is None:
        cache = []
    for index in range(len(cache)):
        diff = dictKeysDiff(cache[index], element)
        if not diff:
            return index, cache
    cache.append(element)
    return -1, cache

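The caching contract above can be demonstrated in isolation: an element already present (an empty dictKeysDiff result, which for flat dicts amounts to plain equality) yields its index, otherwise it is appended and -1 is returned. A standalone sketch, with names invented for illustration:

```python
def dict_cache(element, cache=None):
    # Illustrative reimplementation of dictCache above; plain equality stands in
    # for the "dictKeysDiff(...) is empty" check, which is equivalent for flat dicts.
    if cache is None:
        cache = []
    for index, cached in enumerate(cache):
        if cached == element:
            return index, cache
    cache.append(element)
    return -1, cache

idx, cache = dict_cache({'codec': 'vp9'})          # first sighting: appended
hit, cache = dict_cache({'codec': 'vp9'}, cache)   # second sighting: cache hit
```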
def setDiff(a: set, b: set) -> dict:

    a_only = a - b
    b_only = b - a
    a_and_b = a & b

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = a_and_b
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only

    return diffResult

def permutateList(inputList: list, permutation: list):

    # 0,1,2: ABC
    # 0,2,1: ACB
    # 1,2,0: BCA

    pass

def filterFilename(fileName: str) -> str:
    """This filter replaces characters from TMDB responses that are problematic
    in filenames with less problematic ones, or removes them"""

    return str(fileName).translate(FILENAME_FILTER_TRANSLATION).strip()


def substituteTmdbFilename(fileName: str) -> str:
    """If chaining this method with filterFilename, apply this one first, as the latter destroys some of the patterns matched here"""

    normalizedFileName = str(fileName)

    for fillerMarker in TMDB_FILLER_MARKERS:
        normalizedFileName = normalizedFileName.replace(fillerMarker, '')

    episodeRangeMatch = TMDB_EPISODE_RANGE_SUFFIX_REGEX.search(normalizedFileName)
    if episodeRangeMatch is not None:
        partFirstIndex, partLastIndex = episodeRangeMatch.groups()
        return TMDB_EPISODE_RANGE_SUFFIX_REGEX.sub(
            f"Teil {partFirstIndex}-{partLastIndex}",
            normalizedFileName,
            count=1,
        )

    episodePartMatch = TMDB_EPISODE_PART_SUFFIX_REGEX.search(normalizedFileName)
    if episodePartMatch is not None:
        partIndex = episodePartMatch.group(1)
        return TMDB_EPISODE_PART_SUFFIX_REGEX.sub(
            f"Teil {partIndex}",
            normalizedFileName,
            count=1,
        )

    return normalizedFileName

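The episode-suffix rewriting can be sketched standalone. The regexes are copied from the TMDB_EPISODE_*_SUFFIX_REGEX constants above; the example titles are made up:

```python
import re

# Same patterns as TMDB_EPISODE_RANGE_SUFFIX_REGEX / TMDB_EPISODE_PART_SUFFIX_REGEX above
RANGE_RE = re.compile(r"\(([0-9]+)[-/]([0-9]+)\)$")
PART_RE = re.compile(r"\(([0-9]+)\)$")

def normalize_title(name):
    # Drop TMDB filler markers, then rewrite a trailing "(1/2)" or "(2)" suffix
    # into a "Teil ..." part label, as substituteTmdbFilename does.
    for marker in (" (*)", "(*)"):
        name = name.replace(marker, '')
    m = RANGE_RE.search(name)
    if m:
        return RANGE_RE.sub(f"Teil {m.group(1)}-{m.group(2)}", name, count=1)
    m = PART_RE.search(name)
    if m:
        return PART_RE.sub(f"Teil {m.group(1)}", name, count=1)
    return name
```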
def getEpisodeFileBasename(showName,
                           episodeName,
                           season,
                           episode,
                           indexSeasonDigits = None,
                           indexEpisodeDigits = None,
                           indicatorSeasonDigits = None,
                           indicatorEpisodeDigits = None,
                           context = None):
    """
    One Piece:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 4,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 4

    Three-Body:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 2,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 2

    Dragonball:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 3,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 3

    Boruto:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 4,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 4
    """

    cc: ConfigurationController = context['config'] if context is not None and 'config' in context.keys() else None
    configData = cc.getData() if cc is not None else {}
    outputFilenameTemplate = configData.get(ConfigurationController.OUTPUT_FILENAME_TEMPLATE_KEY,
                                            DEFAULT_OUTPUT_FILENAME_TEMPLATE)
    defaultDigitLengths = ShowDescriptor.getDefaultDigitLengths(context)

    if indexSeasonDigits is None:
        indexSeasonDigits = defaultDigitLengths[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]
    if indexEpisodeDigits is None:
        indexEpisodeDigits = defaultDigitLengths[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]
    if indicatorSeasonDigits is None:
        indicatorSeasonDigits = defaultDigitLengths[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]
    if indicatorEpisodeDigits is None:
        indicatorEpisodeDigits = defaultDigitLengths[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]

    if context is not None and 'logger' in context.keys():
        logger = context['logger']
    else:
        logger = get_ffx_logger()

    indexSeparator = ' ' if indexSeasonDigits or indexEpisodeDigits else ''
    seasonIndex = '{num:{fill}{width}}'.format(num=season, fill='0', width=indexSeasonDigits) if indexSeasonDigits else ''
    episodeIndex = '{num:{fill}{width}}'.format(num=episode, fill='0', width=indexEpisodeDigits) if indexEpisodeDigits else ''

    indicatorSeparator = ' - ' if indicatorSeasonDigits or indicatorEpisodeDigits else ''
    seasonIndicator = 'S{num:{fill}{width}}'.format(num=season, fill='0', width=indicatorSeasonDigits) if indicatorSeasonDigits else ''
    episodeIndicator = 'E{num:{fill}{width}}'.format(num=episode, fill='0', width=indicatorEpisodeDigits) if indicatorEpisodeDigits else ''

    jinjaKwargs = {
        'ffx_show_name': showName,
        'ffx_index_separator': indexSeparator,
        'ffx_season_index': str(seasonIndex),
        'ffx_episode_index': str(episodeIndex),
        'ffx_index': str(seasonIndex) + str(episodeIndex),
        'ffx_episode_name': episodeName,
        'ffx_indicator_separator': indicatorSeparator,
        'ffx_season_indicator': str(seasonIndicator),
        'ffx_episode_indicator': str(episodeIndicator),
        'ffx_indicator': str(seasonIndicator) + str(episodeIndicator)
    }

    jinjaEnv = Environment(undefined=EmptyStringUndefined)
    jinjaTemplate = jinjaEnv.from_string(outputFilenameTemplate)
    return jinjaTemplate.render(**jinjaKwargs)

    # return ''.join(filenameTokens)

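The zero-padded season/episode tokens that the function feeds into the Jinja2 template can be sketched in isolation. The default digit widths follow the docstring's Three-Body example; the helper name is invented for illustration:

```python
def episode_indicator(season, episode, season_digits=2, episode_digits=2):
    # Mirrors the '{num:{fill}{width}}' zero-padding used above; a width of 0
    # suppresses the corresponding token entirely.
    s = 'S{num:{fill}{width}}'.format(num=season, fill='0', width=season_digits) if season_digits else ''
    e = 'E{num:{fill}{width}}'.format(num=episode, fill='0', width=episode_digits) if episode_digits else ''
    return s + e
```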
def formatRichColor(text: str, color: str = None):
    if color is None:
        return text
    else:
        return f"[{color}]{text}[/{color}]"


def removeRichColor(text: str):
    richColorMatch = RICH_COLOR_REGEX.search(str(text))
    if richColorMatch is None:
        return text
    else:
        return str(richColorMatch.group(1))
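Wrapping and unwrapping Rich color tags can be demonstrated standalone (regex copied from RICH_COLOR_REGEX above; helper names are invented):

```python
import re

RICH_COLOR_RE = re.compile(r"\[[a-z_]+\](.+)\[/[a-z_]+\]")

def wrap(text, color=None):
    # Mirrors formatRichColor: no color means no markup.
    return text if color is None else f"[{color}]{text}[/{color}]"

def unwrap(text):
    # Mirrors removeRichColor: extract the payload of one [color]...[/color] pair.
    m = RICH_COLOR_RE.search(str(text))
    return text if m is None else str(m.group(1))
```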
@@ -1,220 +0,0 @@
from enum import Enum
import difflib


class IsoLanguage(Enum):

    ABKHAZIAN = {"name": "Abkhazian", "iso639_1": "ab", "iso639_2": ["abk"]}
    AFAR = {"name": "Afar", "iso639_1": "aa", "iso639_2": ["aar"]}
    AFRIKAANS = {"name": "Afrikaans", "iso639_1": "af", "iso639_2": ["afr"]}
    AKAN = {"name": "Akan", "iso639_1": "ak", "iso639_2": ["aka"]}
    ALBANIAN = {"name": "Albanian", "iso639_1": "sq", "iso639_2": ["sqi", "alb"]}
    AMHARIC = {"name": "Amharic", "iso639_1": "am", "iso639_2": ["amh"]}
    ARABIC = {"name": "Arabic", "iso639_1": "ar", "iso639_2": ["ara"]}
    ARAGONESE = {"name": "Aragonese", "iso639_1": "an", "iso639_2": ["arg"]}
    ARMENIAN = {"name": "Armenian", "iso639_1": "hy", "iso639_2": ["hye", "arm"]}
    ASSAMESE = {"name": "Assamese", "iso639_1": "as", "iso639_2": ["asm"]}
    AVARIC = {"name": "Avaric", "iso639_1": "av", "iso639_2": ["ava"]}
    AVESTAN = {"name": "Avestan", "iso639_1": "ae", "iso639_2": ["ave"]}
    AYMARA = {"name": "Aymara", "iso639_1": "ay", "iso639_2": ["aym"]}
    AZERBAIJANI = {"name": "Azerbaijani", "iso639_1": "az", "iso639_2": ["aze"]}
    BAMBARA = {"name": "Bambara", "iso639_1": "bm", "iso639_2": ["bam"]}
    BASHKIR = {"name": "Bashkir", "iso639_1": "ba", "iso639_2": ["bak"]}
    BASQUE = {"name": "Basque", "iso639_1": "eu", "iso639_2": ["eus", "baq"]}
    BELARUSIAN = {"name": "Belarusian", "iso639_1": "be", "iso639_2": ["bel"]}
    BENGALI = {"name": "Bengali", "iso639_1": "bn", "iso639_2": ["ben"]}
    BISLAMA = {"name": "Bislama", "iso639_1": "bi", "iso639_2": ["bis"]}
    BOKMAL = {"name": "Bokmål", "iso639_1": "nb", "iso639_2": ["nob"]}
    BOSNIAN = {"name": "Bosnian", "iso639_1": "bs", "iso639_2": ["bos"]}
    BRETON = {"name": "Breton", "iso639_1": "br", "iso639_2": ["bre"]}
    BULGARIAN = {"name": "Bulgarian", "iso639_1": "bg", "iso639_2": ["bul"]}
    BURMESE = {"name": "Burmese", "iso639_1": "my", "iso639_2": ["mya", "bur"]}
    CATALAN = {"name": "Catalan", "iso639_1": "ca", "iso639_2": ["cat"]}
    CHAMORRO = {"name": "Chamorro", "iso639_1": "ch", "iso639_2": ["cha"]}
    CHECHEN = {"name": "Chechen", "iso639_1": "ce", "iso639_2": ["che"]}
    CHICHEWA = {"name": "Chichewa", "iso639_1": "ny", "iso639_2": ["nya"]}
    CHINESE = {"name": "Chinese", "iso639_1": "zh", "iso639_2": ["zho", "chi"]}
    CHURCH_SLAVIC = {"name": "Church Slavic", "iso639_1": "cu", "iso639_2": ["chu"]}
    CHUVASH = {"name": "Chuvash", "iso639_1": "cv", "iso639_2": ["chv"]}
    CORNISH = {"name": "Cornish", "iso639_1": "kw", "iso639_2": ["cor"]}
    CORSICAN = {"name": "Corsican", "iso639_1": "co", "iso639_2": ["cos"]}
    CREE = {"name": "Cree", "iso639_1": "cr", "iso639_2": ["cre"]}
    CROATIAN = {"name": "Croatian", "iso639_1": "hr", "iso639_2": ["hrv"]}
    CZECH = {"name": "Czech", "iso639_1": "cs", "iso639_2": ["ces", "cze"]}
    DANISH = {"name": "Danish", "iso639_1": "da", "iso639_2": ["dan"]}
    DIVEHI = {"name": "Divehi", "iso639_1": "dv", "iso639_2": ["div"]}
    DUTCH = {"name": "Dutch", "iso639_1": "nl", "iso639_2": ["nld", "dut"]}
    DZONGKHA = {"name": "Dzongkha", "iso639_1": "dz", "iso639_2": ["dzo"]}
    ENGLISH = {"name": "English", "iso639_1": "en", "iso639_2": ["eng"]}
    ESPERANTO = {"name": "Esperanto", "iso639_1": "eo", "iso639_2": ["epo"]}
    ESTONIAN = {"name": "Estonian", "iso639_1": "et", "iso639_2": ["est"]}
    EWE = {"name": "Ewe", "iso639_1": "ee", "iso639_2": ["ewe"]}
    FAROESE = {"name": "Faroese", "iso639_1": "fo", "iso639_2": ["fao"]}
    FIJIAN = {"name": "Fijian", "iso639_1": "fj", "iso639_2": ["fij"]}
    FINNISH = {"name": "Finnish", "iso639_1": "fi", "iso639_2": ["fin"]}
    FRENCH = {"name": "French", "iso639_1": "fr", "iso639_2": ["fra", "fre"]}
    FULAH = {"name": "Fulah", "iso639_1": "ff", "iso639_2": ["ful"]}
    GALICIAN = {"name": "Galician", "iso639_1": "gl", "iso639_2": ["glg"]}
    GANDA = {"name": "Ganda", "iso639_1": "lg", "iso639_2": ["lug"]}
    GEORGIAN = {"name": "Georgian", "iso639_1": "ka", "iso639_2": ["kat", "geo"]}
    GERMAN = {"name": "German", "iso639_1": "de", "iso639_2": ["deu", "ger"]}
    GREEK = {"name": "Greek", "iso639_1": "el", "iso639_2": ["ell", "gre"]}
    GUARANI = {"name": "Guarani", "iso639_1": "gn", "iso639_2": ["grn"]}
    GUJARATI = {"name": "Gujarati", "iso639_1": "gu", "iso639_2": ["guj"]}
    HAITIAN = {"name": "Haitian", "iso639_1": "ht", "iso639_2": ["hat"]}
    HAUSA = {"name": "Hausa", "iso639_1": "ha", "iso639_2": ["hau"]}
    HEBREW = {"name": "Hebrew", "iso639_1": "he", "iso639_2": ["heb"]}
    HERERO = {"name": "Herero", "iso639_1": "hz", "iso639_2": ["her"]}
    HINDI = {"name": "Hindi", "iso639_1": "hi", "iso639_2": ["hin"]}
    HIRI_MOTU = {"name": "Hiri Motu", "iso639_1": "ho", "iso639_2": ["hmo"]}
    HUNGARIAN = {"name": "Hungarian", "iso639_1": "hu", "iso639_2": ["hun"]}
    ICELANDIC = {"name": "Icelandic", "iso639_1": "is", "iso639_2": ["isl", "ice"]}
    IDO = {"name": "Ido", "iso639_1": "io", "iso639_2": ["ido"]}
    IGBO = {"name": "Igbo", "iso639_1": "ig", "iso639_2": ["ibo"]}
    INDONESIAN = {"name": "Indonesian", "iso639_1": "id", "iso639_2": ["ind"]}
    INTERLINGUA = {"name": "Interlingua", "iso639_1": "ia", "iso639_2": ["ina"]}
    INTERLINGUE = {"name": "Interlingue", "iso639_1": "ie", "iso639_2": ["ile"]}
    INUKTITUT = {"name": "Inuktitut", "iso639_1": "iu", "iso639_2": ["iku"]}
    INUPIAQ = {"name": "Inupiaq", "iso639_1": "ik", "iso639_2": ["ipk"]}
    IRISH = {"name": "Irish", "iso639_1": "ga", "iso639_2": ["gle"]}
    ITALIAN = {"name": "Italian", "iso639_1": "it", "iso639_2": ["ita"]}
    JAPANESE = {"name": "Japanese", "iso639_1": "ja", "iso639_2": ["jpn"]}
    JAVANESE = {"name": "Javanese", "iso639_1": "jv", "iso639_2": ["jav"]}
    KALAALLISUT = {"name": "Kalaallisut", "iso639_1": "kl", "iso639_2": ["kal"]}
    KANNADA = {"name": "Kannada", "iso639_1": "kn", "iso639_2": ["kan"]}
    KANURI = {"name": "Kanuri", "iso639_1": "kr", "iso639_2": ["kau"]}
    KASHMIRI = {"name": "Kashmiri", "iso639_1": "ks", "iso639_2": ["kas"]}
    KAZAKH = {"name": "Kazakh", "iso639_1": "kk", "iso639_2": ["kaz"]}
    KHMER = {"name": "Khmer", "iso639_1": "km", "iso639_2": ["khm"]}
    KIKUYU = {"name": "Kikuyu", "iso639_1": "ki", "iso639_2": ["kik"]}
    KINYARWANDA = {"name": "Kinyarwanda", "iso639_1": "rw", "iso639_2": ["kin"]}
    KIRGHIZ = {"name": "Kirghiz", "iso639_1": "ky", "iso639_2": ["kir"]}
    KOMI = {"name": "Komi", "iso639_1": "kv", "iso639_2": ["kom"]}
    KONGO = {"name": "Kongo", "iso639_1": "kg", "iso639_2": ["kon"]}
    KOREAN = {"name": "Korean", "iso639_1": "ko", "iso639_2": ["kor"]}
    KUANYAMA = {"name": "Kuanyama", "iso639_1": "kj", "iso639_2": ["kua"]}
    KURDISH = {"name": "Kurdish", "iso639_1": "ku", "iso639_2": ["kur"]}
    LAO = {"name": "Lao", "iso639_1": "lo", "iso639_2": ["lao"]}
    LATIN = {"name": "Latin", "iso639_1": "la", "iso639_2": ["lat"]}
    LATVIAN = {"name": "Latvian", "iso639_1": "lv", "iso639_2": ["lav"]}
    LIMBURGAN = {"name": "Limburgan", "iso639_1": "li", "iso639_2": ["lim"]}
    LINGALA = {"name": "Lingala", "iso639_1": "ln", "iso639_2": ["lin"]}
    LITHUANIAN = {"name": "Lithuanian", "iso639_1": "lt", "iso639_2": ["lit"]}
    LUBA_KATANGA = {"name": "Luba-Katanga", "iso639_1": "lu", "iso639_2": ["lub"]}
    LUXEMBOURGISH = {"name": "Luxembourgish", "iso639_1": "lb", "iso639_2": ["ltz"]}
    MACEDONIAN = {"name": "Macedonian", "iso639_1": "mk", "iso639_2": ["mkd", "mac"]}
    MALAGASY = {"name": "Malagasy", "iso639_1": "mg", "iso639_2": ["mlg"]}
    MALAY = {"name": "Malay", "iso639_1": "ms", "iso639_2": ["msa", "may"]}
    MALAYALAM = {"name": "Malayalam", "iso639_1": "ml", "iso639_2": ["mal"]}
    MALTESE = {"name": "Maltese", "iso639_1": "mt", "iso639_2": ["mlt"]}
    MANX = {"name": "Manx", "iso639_1": "gv", "iso639_2": ["glv"]}
    MAORI = {"name": "Maori", "iso639_1": "mi", "iso639_2": ["mri", "mao"]}
    MARATHI = {"name": "Marathi", "iso639_1": "mr", "iso639_2": ["mar"]}
    MARSHALLESE = {"name": "Marshallese", "iso639_1": "mh", "iso639_2": ["mah"]}
    MONGOLIAN = {"name": "Mongolian", "iso639_1": "mn", "iso639_2": ["mon"]}
    NAURU = {"name": "Nauru", "iso639_1": "na", "iso639_2": ["nau"]}
    NAVAJO = {"name": "Navajo", "iso639_1": "nv", "iso639_2": ["nav"]}
    NDONGA = {"name": "Ndonga", "iso639_1": "ng", "iso639_2": ["ndo"]}
    NEPALI = {"name": "Nepali", "iso639_1": "ne", "iso639_2": ["nep"]}
    NORTH_NDEBELE = {"name": "North Ndebele", "iso639_1": "nd", "iso639_2": ["nde"]}
    NORTHERN_SAMI = {"name": "Northern Sami", "iso639_1": "se", "iso639_2": ["sme"]}
    NORWEGIAN = {"name": "Norwegian", "iso639_1": "no", "iso639_2": ["nor"]}
    NORWEGIAN_NYNORSK = {"name": "Nynorsk", "iso639_1": "nn", "iso639_2": ["nno"]}
    OCCITAN = {"name": "Occitan", "iso639_1": "oc", "iso639_2": ["oci"]}
    OJIBWA = {"name": "Ojibwa", "iso639_1": "oj", "iso639_2": ["oji"]}
    ORIYA = {"name": "Oriya", "iso639_1": "or", "iso639_2": ["ori"]}
    OROMO = {"name": "Oromo", "iso639_1": "om", "iso639_2": ["orm"]}
    OSSETIAN = {"name": "Ossetian", "iso639_1": "os", "iso639_2": ["oss"]}
    PALI = {"name": "Pali", "iso639_1": "pi", "iso639_2": ["pli"]}
    PANJABI = {"name": "Panjabi", "iso639_1": "pa", "iso639_2": ["pan"]}
    PERSIAN = {"name": "Persian", "iso639_1": "fa", "iso639_2": ["fas", "per"]}
    POLISH = {"name": "Polish", "iso639_1": "pl", "iso639_2": ["pol"]}
    PORTUGUESE = {"name": "Portuguese", "iso639_1": "pt", "iso639_2": ["por"]}
    PUSHTO = {"name": "Pushto", "iso639_1": "ps", "iso639_2": ["pus"]}
    QUECHUA = {"name": "Quechua", "iso639_1": "qu", "iso639_2": ["que"]}
    ROMANIAN = {"name": "Romanian", "iso639_1": "ro", "iso639_2": ["ron", "rum"]}
    ROMANSH = {"name": "Romansh", "iso639_1": "rm", "iso639_2": ["roh"]}
    RUNDI = {"name": "Rundi", "iso639_1": "rn", "iso639_2": ["run"]}
    RUSSIAN = {"name": "Russian", "iso639_1": "ru", "iso639_2": ["rus"]}
    SAMOAN = {"name": "Samoan", "iso639_1": "sm", "iso639_2": ["smo"]}
    SANGO = {"name": "Sango", "iso639_1": "sg", "iso639_2": ["sag"]}
    SANSKRIT = {"name": "Sanskrit", "iso639_1": "sa", "iso639_2": ["san"]}
    SARDINIAN = {"name": "Sardinian", "iso639_1": "sc", "iso639_2": ["srd"]}
    SCOTTISH_GAELIC = {"name": "Scottish Gaelic", "iso639_1": "gd", "iso639_2": ["gla"]}
    SERBIAN = {"name": "Serbian", "iso639_1": "sr", "iso639_2": ["srp"]}
    SHONA = {"name": "Shona", "iso639_1": "sn", "iso639_2": ["sna"]}
    SICHUAN_YI = {"name": "Sichuan Yi", "iso639_1": "ii", "iso639_2": ["iii"]}
    SINDHI = {"name": "Sindhi", "iso639_1": "sd", "iso639_2": ["snd"]}
    SINHALA = {"name": "Sinhala", "iso639_1": "si", "iso639_2": ["sin"]}
    SLOVAK = {"name": "Slovak", "iso639_1": "sk", "iso639_2": ["slk", "slo"]}
    SLOVENIAN = {"name": "Slovenian", "iso639_1": "sl", "iso639_2": ["slv"]}
    SOMALI = {"name": "Somali", "iso639_1": "so", "iso639_2": ["som"]}
    SOUTH_NDEBELE = {"name": "South Ndebele", "iso639_1": "nr", "iso639_2": ["nbl"]}
    SOUTHERN_SOTHO = {"name": "Southern Sotho", "iso639_1": "st", "iso639_2": ["sot"]}
    SPANISH = {"name": "Spanish", "iso639_1": "es", "iso639_2": ["spa"]}
    SUNDANESE = {"name": "Sundanese", "iso639_1": "su", "iso639_2": ["sun"]}
    SWAHILI = {"name": "Swahili", "iso639_1": "sw", "iso639_2": ["swa"]}
    SWATI = {"name": "Swati", "iso639_1": "ss", "iso639_2": ["ssw"]}
    SWEDISH = {"name": "Swedish", "iso639_1": "sv", "iso639_2": ["swe"]}
    TAGALOG = {"name": "Tagalog", "iso639_1": "tl", "iso639_2": ["tgl"]}
    TAHITIAN = {"name": "Tahitian", "iso639_1": "ty", "iso639_2": ["tah"]}
    TAJIK = {"name": "Tajik", "iso639_1": "tg", "iso639_2": ["tgk"]}
    TAMIL = {"name": "Tamil", "iso639_1": "ta", "iso639_2": ["tam"]}
    TATAR = {"name": "Tatar", "iso639_1": "tt", "iso639_2": ["tat"]}
    TELUGU = {"name": "Telugu", "iso639_1": "te", "iso639_2": ["tel"]}
    THAI = {"name": "Thai", "iso639_1": "th", "iso639_2": ["tha"]}
    TIBETAN = {"name": "Tibetan", "iso639_1": "bo", "iso639_2": ["bod", "tib"]}
    TIGRINYA = {"name": "Tigrinya", "iso639_1": "ti", "iso639_2": ["tir"]}
    TONGA = {"name": "Tonga", "iso639_1": "to", "iso639_2": ["ton"]}
    TSONGA = {"name": "Tsonga", "iso639_1": "ts", "iso639_2": ["tso"]}
    TSWANA = {"name": "Tswana", "iso639_1": "tn", "iso639_2": ["tsn"]}
    TURKISH = {"name": "Turkish", "iso639_1": "tr", "iso639_2": ["tur"]}
    TURKMEN = {"name": "Turkmen", "iso639_1": "tk", "iso639_2": ["tuk"]}
    TWI = {"name": "Twi", "iso639_1": "tw", "iso639_2": ["twi"]}
    UIGHUR = {"name": "Uighur", "iso639_1": "ug", "iso639_2": ["uig"]}
    UKRAINIAN = {"name": "Ukrainian", "iso639_1": "uk", "iso639_2": ["ukr"]}
    URDU = {"name": "Urdu", "iso639_1": "ur", "iso639_2": ["urd"]}
    UZBEK = {"name": "Uzbek", "iso639_1": "uz", "iso639_2": ["uzb"]}
    VENDA = {"name": "Venda", "iso639_1": "ve", "iso639_2": ["ven"]}
    VIETNAMESE = {"name": "Vietnamese", "iso639_1": "vi", "iso639_2": ["vie"]}
    VOLAPUK = {"name": "Volapük", "iso639_1": "vo", "iso639_2": ["vol"]}
    WALLOON = {"name": "Walloon", "iso639_1": "wa", "iso639_2": ["wln"]}
    WELSH = {"name": "Welsh", "iso639_1": "cy", "iso639_2": ["cym", "wel"]}
    WESTERN_FRISIAN = {"name": "Western Frisian", "iso639_1": "fy", "iso639_2": ["fry"]}
    WOLOF = {"name": "Wolof", "iso639_1": "wo", "iso639_2": ["wol"]}
    XHOSA = {"name": "Xhosa", "iso639_1": "xh", "iso639_2": ["xho"]}
    YIDDISH = {"name": "Yiddish", "iso639_1": "yi", "iso639_2": ["yid"]}
    YORUBA = {"name": "Yoruba", "iso639_1": "yo", "iso639_2": ["yor"]}
    ZHUANG = {"name": "Zhuang", "iso639_1": "za", "iso639_2": ["zha"]}
    ZULU = {"name": "Zulu", "iso639_1": "zu", "iso639_2": ["zul"]}

    FILIPINO = {"name": "Filipino", "iso639_1": "tl", "iso639_2": ["fil"]}

    UNDEFINED = {"name": "undefined", "iso639_1": "xx", "iso639_2": ["und"]}

    @staticmethod
    def find(label: str):

        closestMatches = difflib.get_close_matches(label, [l.value["name"] for l in IsoLanguage], n=1)

        if closestMatches:
            foundLangs = [l for l in IsoLanguage if l.value["name"] == closestMatches[0]]
            return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED
        else:
            return IsoLanguage.UNDEFINED

    @staticmethod
    def findThreeLetter(threeLetter: str):
        foundLangs = [l for l in IsoLanguage if str(threeLetter) in l.value["iso639_2"]]
        return foundLangs[0] if foundLangs else IsoLanguage.UNDEFINED

    def label(self):
        return str(self.value["name"])

    def twoLetter(self):
        return str(self.value["iso639_1"])

    def threeLetter(self):
        return str(self.value["iso639_2"][0])
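The fuzzy name lookup in find relies on difflib.get_close_matches. A minimal standalone sketch over an invented three-language subset of the mapping above:

```python
import difflib

# A small stand-in for the IsoLanguage name -> iso639_2 mapping above
NAMES = {"German": "deu", "English": "eng", "Japanese": "jpn"}

def find_three_letter(label):
    # Closest registered name wins; anything too far off falls back to "und",
    # mirroring IsoLanguage.find returning IsoLanguage.UNDEFINED.
    match = difflib.get_close_matches(label, NAMES.keys(), n=1)
    return NAMES[match[0]] if match else "und"
```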
@@ -1,68 +0,0 @@
import logging
import os


FFX_LOGGER_NAME = "FFX"
CONSOLE_HANDLER_NAME = "ffx-console"
FILE_HANDLER_NAME = "ffx-file"


def get_ffx_logger(name: str = FFX_LOGGER_NAME) -> logging.Logger:
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    if not logger.handlers:
        logger.addHandler(logging.NullHandler())

    return logger


def configure_ffx_logger(
    log_file_path: str,
    file_level: int,
    console_level: int,
    name: str = FFX_LOGGER_NAME,
) -> logging.Logger:
    logger = get_ffx_logger(name)
    logger.propagate = False

    for handler in list(logger.handlers):
        if isinstance(handler, logging.NullHandler):
            logger.removeHandler(handler)

    console_handler = next(
        (handler for handler in logger.handlers if handler.get_name() == CONSOLE_HANDLER_NAME),
        None,
    )
    if console_handler is None:
        console_handler = logging.StreamHandler()
        console_handler.set_name(CONSOLE_HANDLER_NAME)
        logger.addHandler(console_handler)

    console_handler.setLevel(console_level)
    console_handler.setFormatter(logging.Formatter("%(message)s"))

    normalized_log_path = os.path.abspath(log_file_path)
    file_handler = next(
        (handler for handler in logger.handlers if handler.get_name() == FILE_HANDLER_NAME),
        None,
    )
    if (
        file_handler is not None
        and os.path.abspath(file_handler.baseFilename) != normalized_log_path
    ):
        logger.removeHandler(file_handler)
        file_handler.close()
        file_handler = None

    if file_handler is None:
        file_handler = logging.FileHandler(normalized_log_path)
        file_handler.set_name(FILE_HANDLER_NAME)
        logger.addHandler(file_handler)

    file_handler.setLevel(file_level)
    file_handler.setFormatter(
        logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    )

    return logger
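The handler-deduplication pattern above (look a handler up by name, create it only once) makes reconfiguration idempotent: repeated calls never stack duplicate handlers. A minimal sketch, with the logger and handler names invented for the demo:

```python
import logging

DEMO_HANDLER_NAME = "demo-console"

def configure_demo_logger(name="DEMO"):
    # Repeatedly calling this is safe: the handler is found by name and only
    # created on the first call, mirroring configure_ffx_logger above.
    logger = logging.getLogger(name)
    handler = next((h for h in logger.handlers if h.get_name() == DEMO_HANDLER_NAME), None)
    if handler is None:
        handler = logging.StreamHandler()
        handler.set_name(DEMO_HANDLER_NAME)
        logger.addHandler(handler)
    handler.setLevel(logging.INFO)  # levels can be re-applied on every call
    return logger

first = configure_demo_logger()
second = configure_demo_logger()
```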
@@ -1,47 +0,0 @@
import click

from ffx.model.pattern import Pattern
from ffx.media_descriptor import MediaDescriptor

from ffx.tag_controller import TagController
from ffx.track_controller import TrackController


class MediaController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience

        self.__logger = context['logger']

        self.__tc = TrackController(context = context)
        self.__tac = TagController(context = context)

    def setPatternMediaDescriptor(self, mediaDescriptor: MediaDescriptor, patternId: int):

        s = None
        try:

            pid = int(patternId)

            s = self.Session()
            pattern = s.query(Pattern).filter(Pattern.id == pid).first()

            if pattern is not None:

                for mediaTagKey, mediaTagValue in mediaDescriptor.getTags():
                    self.__tac.updateMediaTag(pid, mediaTagKey, mediaTagValue)
                # for trackDescriptor in mediaDescriptor.getAllTrackDescriptors():
                for trackDescriptor in mediaDescriptor.getTrackDescriptors():
                    self.__tc.addTrack(trackDescriptor, patternId = pid)

                s.commit()
                return True
            else:
                return False

        except Exception as ex:
            self.__logger.error(f"MediaController.setPatternMediaDescriptor(): {repr(ex)}")
            raise click.ClickException(f"MediaController.setPatternMediaDescriptor(): {repr(ex)}")
        finally:
            if s is not None:
                s.close()
@@ -1,556 +0,0 @@
|
|||||||
import os, re, click
|
|
||||||
|
|
||||||
from typing import List, Self
|
|
||||||
|
|
||||||
from ffx.track_type import TrackType
|
|
||||||
from ffx.iso_language import IsoLanguage
|
|
||||||
|
|
||||||
from ffx.track_disposition import TrackDisposition
|
|
||||||
from ffx.track_codec import TrackCodec
|
|
||||||
|
|
||||||
from ffx.track_descriptor import TrackDescriptor
|
|
||||||
from ffx.logging_utils import get_ffx_logger
|
|
||||||
|
|
||||||
|
|
||||||
class MediaDescriptor:
|
|
||||||
"""This class represents the structural content of a media file including streams and metadata"""
|
|
||||||
|
|
||||||
CONTEXT_KEY = "context"
|
|
||||||
|
|
||||||
TAGS_KEY = "tags"
|
|
||||||
TRACKS_KEY = "tracks"
|
|
||||||
|
|
||||||
TRACK_DESCRIPTOR_LIST_KEY = "track_descriptors"
|
|
||||||
ATTACHMENT_DESCRIPTOR_LIST_KEY = "attachment_descriptors"
|
|
||||||
CLEAR_TAGS_FLAG_KEY = "clear_tags"
|
|
||||||
|
|
||||||
FFPROBE_DISPOSITION_KEY = "disposition"
|
|
||||||
FFPROBE_TAGS_KEY = "tags"
|
|
||||||
FFPROBE_CODEC_TYPE_KEY = "codec_type"
|
|
||||||
|
|
||||||
#407 remove as well
|
|
||||||
EXCLUDED_MEDIA_TAGS = ["creation_time"]
|
|
||||||
|
|
||||||
SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH = '[sS]([0-9]+)[eE]([0-9]+)_([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*'
|
|
||||||
STREAM_LANGUAGE_DISPOSITIONS_MATCH = '([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*'
|
|
||||||
|
|
||||||
SUBTITLE_FILE_EXTENSION = 'vtt'
|
|
||||||
|
|
||||||
    def __init__(self, **kwargs):

        if MediaDescriptor.CONTEXT_KEY in kwargs.keys():
            if type(kwargs[MediaDescriptor.CONTEXT_KEY]) is not dict:
                raise TypeError(
                    f"MediaDescriptor.__init__(): Argument {MediaDescriptor.CONTEXT_KEY} is required to be of type dict"
                )
            self.__context = kwargs[MediaDescriptor.CONTEXT_KEY]
            self.__logger = self.__context['logger']
        else:
            self.__context = {}
            self.__logger = get_ffx_logger()

        if MediaDescriptor.TAGS_KEY in kwargs.keys():
            if type(kwargs[MediaDescriptor.TAGS_KEY]) is not dict:
                raise TypeError(
                    f"MediaDescriptor.__init__(): Argument {MediaDescriptor.TAGS_KEY} is required to be of type dict"
                )
            self.__mediaTags = kwargs[MediaDescriptor.TAGS_KEY]
        else:
            self.__mediaTags = {}

        if MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY in kwargs.keys():
            if (
                type(kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]) is not list
            ):  # Use List typehint for TrackDescriptor as well if it works
                raise TypeError(
                    f"MediaDescriptor.__init__(): Argument {MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY} is required to be of type list"
                )
            for d in kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]:
                if type(d) is not TrackDescriptor:
                    raise TypeError(
                        f"MediaDescriptor.__init__(): All elements of argument list {MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY} are required to be of type TrackDescriptor"
                    )
            self.__trackDescriptors: List[TrackDescriptor] = kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY]
        else:
            self.__trackDescriptors: List[TrackDescriptor] = []
    def setTrackLanguage(self, language: str, index: int, trackType: TrackType = None):

        trackLanguage = IsoLanguage.findThreeLetter(language)
        if trackLanguage == IsoLanguage.UNDEFINED:
            self.__logger.warning('MediaDescriptor.setTrackLanguage(): Parameter language does not contain a registered '
                                  + 'ISO 639 3-letter language code, skipping to set language for '
                                  + ('' if trackType is None else trackType.label() + ' ') + f"track {index}")
            return

        trackList = self.getTrackDescriptors(trackType=trackType)

        if index < 0 or index > len(trackList) - 1:
            self.__logger.warning(f"MediaDescriptor.setTrackLanguage(): Parameter index ({index}) is "
                                  + f"out of range of {'' if trackType is None else trackType.label() + ' '}track list")
            return

        td: TrackDescriptor = trackList[index]
        td.setLanguage(trackLanguage)

        return
    def setTrackTitle(self, title: str, index: int, trackType: TrackType = None):

        trackList = self.getTrackDescriptors(trackType=trackType)

        if index < 0 or index > len(trackList) - 1:
            self.__logger.error(f"MediaDescriptor.setTrackTitle(): Parameter index ({index}) is "
                                + f"out of range of {'' if trackType is None else trackType.label() + ' '}track list")
            raise click.Abort()

        td: TrackDescriptor = trackList[index]
        td.setTitle(title)
    def setDefaultSubTrack(self, trackType: TrackType, subIndex: int):
        # for t in self.getAllTrackDescriptors():
        for t in self.getTrackDescriptors():
            if t.getType() == trackType:
                t.setDispositionFlag(
                    TrackDisposition.DEFAULT, t.getSubIndex() == int(subIndex)
                )

    def setForcedSubTrack(self, trackType: TrackType, subIndex: int):
        # for t in self.getAllTrackDescriptors():
        for t in self.getTrackDescriptors():
            if t.getType() == trackType:
                t.setDispositionFlag(
                    TrackDisposition.FORCED, t.getSubIndex() == int(subIndex)
                )
    def checkConfiguration(self):

        videoTracks = self.getVideoTracks()
        audioTracks = self.getAudioTracks()
        subtitleTracks = self.getSubtitleTracks()

        if len([v for v in videoTracks if v.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
            raise ValueError('More than one default video track')
        if len([a for a in audioTracks if a.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
            raise ValueError('More than one default audio track')
        if len([s for s in subtitleTracks if s.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
            raise ValueError('More than one default subtitle track')

        if len([v for v in videoTracks if v.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
            raise ValueError('More than one forced video track')
        if len([a for a in audioTracks if a.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
            raise ValueError('More than one forced audio track')
        if len([s for s in subtitleTracks if s.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
            raise ValueError('More than one forced subtitle track')

        trackDescriptors = videoTracks + audioTracks + subtitleTracks
        sourceIndices = [t.getSourceIndex() for t in trackDescriptors]
        if len(set(sourceIndices)) < len(trackDescriptors):
            raise ValueError('Multiple streams originating from the same source stream')
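The duplicate check at the end of `checkConfiguration()` relies on set cardinality: a set drops duplicates, so a shorter set means at least two output streams name the same source stream. A minimal standalone illustration:

```python
# Minimal illustration of the duplicate-source-index check used above:
# a set drops duplicates, so a shorter set means at least two streams
# share the same source stream.
source_indices = [0, 1, 2, 1]
has_duplicates = len(set(source_indices)) < len(source_indices)
print(has_duplicates)  # True
```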
    def applyOverrides(self, overrides: dict):

        if 'languages' in overrides.keys():
            for trackIndex in overrides['languages'].keys():
                self.setTrackLanguage(overrides['languages'][trackIndex], trackIndex)

        if 'titles' in overrides.keys():
            for trackIndex in overrides['titles'].keys():
                self.setTrackTitle(overrides['titles'][trackIndex], trackIndex)

        if 'forced_video' in overrides.keys():
            sti = int(overrides['forced_video'])
            self.setForcedSubTrack(TrackType.VIDEO, sti)
            self.setDefaultSubTrack(TrackType.VIDEO, sti)

        elif 'default_video' in overrides.keys():
            sti = int(overrides['default_video'])
            self.setDefaultSubTrack(TrackType.VIDEO, sti)

        if 'forced_audio' in overrides.keys():
            sti = int(overrides['forced_audio'])
            self.setForcedSubTrack(TrackType.AUDIO, sti)
            self.setDefaultSubTrack(TrackType.AUDIO, sti)

        elif 'default_audio' in overrides.keys():
            sti = int(overrides['default_audio'])
            self.setDefaultSubTrack(TrackType.AUDIO, sti)

        if 'forced_subtitle' in overrides.keys():
            sti = int(overrides['forced_subtitle'])
            self.setForcedSubTrack(TrackType.SUBTITLE, sti)
            self.setDefaultSubTrack(TrackType.SUBTITLE, sti)

        elif 'default_subtitle' in overrides.keys():
            sti = int(overrides['default_subtitle'])
            self.setDefaultSubTrack(TrackType.SUBTITLE, sti)

        if 'stream_order' in overrides.keys():
            self.rearrangeTrackDescriptors(overrides['stream_order'])
    def applySourceIndices(self, sourceMediaDescriptor: Self):
        # sourceTrackDescriptors = sourceMediaDescriptor.getAllTrackDescriptors()
        sourceTrackDescriptors = sourceMediaDescriptor.getTrackDescriptors()

        numTrackDescriptors = len(self.__trackDescriptors)
        if len(sourceTrackDescriptors) != numTrackDescriptors:
            raise ValueError('MediaDescriptor.applySourceIndices(): Number of track descriptors does not match')

        for trackIndex in range(numTrackDescriptors):
            self.__trackDescriptors[trackIndex].setSourceIndex(sourceTrackDescriptors[trackIndex].getSourceIndex())
    def rearrangeTrackDescriptors(self, newOrder: List[int]):
        if len(newOrder) != len(self.__trackDescriptors):
            raise ValueError('Length of list with reordered indices does not match number of track descriptors')
        reorderedTrackDescriptors = []
        for oldIndex in newOrder:
            reorderedTrackDescriptors.append(self.__trackDescriptors[oldIndex])
        self.__trackDescriptors = reorderedTrackDescriptors
        self.reindexSubIndices()
        self.reindexIndices()
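`rearrangeTrackDescriptors()` applies a plain permutation (element `i` of the new list is the element at `newOrder[i]` of the old list) and then recomputes the overall indices and the per-type sub-indices. A standalone sketch, using hypothetical `(type, name)` tuples in place of the real `TrackDescriptor` objects:

```python
# Standalone sketch of the permutation performed by rearrangeTrackDescriptors().
# The (type, name) tuples are hypothetical stand-ins for TrackDescriptor objects.
tracks = [("video", "v0"), ("audio", "a0"), ("subtitle", "s0"), ("audio", "a1")]

new_order = [0, 3, 1, 2]  # element i of the result comes from position new_order[i]
reordered = [tracks[old_index] for old_index in new_order]
print(reordered)  # [('video', 'v0'), ('audio', 'a1'), ('audio', 'a0'), ('subtitle', 's0')]

# Recompute per-type sub-indices, as reindexSubIndices() does:
sub_counter = {}
sub_indices = []
for track_type, _ in reordered:
    sub_indices.append(sub_counter.get(track_type, 0))
    sub_counter[track_type] = sub_counter.get(track_type, 0) + 1
print(sub_indices)  # [0, 0, 1, 0]
```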
    @classmethod
    def fromFfprobe(cls, context, formatData, streamData):

        kwargs = {}

        kwargs[MediaDescriptor.CONTEXT_KEY] = context

        if MediaDescriptor.FFPROBE_TAGS_KEY in formatData.keys():
            kwargs[MediaDescriptor.TAGS_KEY] = formatData[
                MediaDescriptor.FFPROBE_TAGS_KEY
            ]

        kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY] = []

        # TODO: Possibly obsolete
        subIndexCounters = {}

        for streamObj in streamData:

            ffprobeCodecType = streamObj[MediaDescriptor.FFPROBE_CODEC_TYPE_KEY]
            trackType = TrackType.fromLabel(ffprobeCodecType)

            if trackType != TrackType.UNKNOWN:

                if trackType not in subIndexCounters.keys():
                    subIndexCounters[trackType] = 0

                kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY].append(
                    TrackDescriptor.fromFfprobe(
                        streamObj, subIndex=subIndexCounters[trackType]
                    )
                )
                subIndexCounters[trackType] += 1

        return cls(**kwargs)
    def getTags(self):
        return self.__mediaTags

    def sortSubIndices(
        self, descriptors: List[TrackDescriptor]
    ) -> List[TrackDescriptor]:
        subIndex = 0
        for d in descriptors:
            d.setSubIndex(subIndex)
            subIndex += 1
        return descriptors
    def reindexSubIndices(self, trackDescriptors: list = []):
        tdList = trackDescriptors if trackDescriptors else self.__trackDescriptors
        subIndexCounter = {}
        for td in tdList:
            trackType = td.getType()
            if trackType not in subIndexCounter.keys():
                subIndexCounter[trackType] = 0
            td.setSubIndex(subIndexCounter[trackType])
            subIndexCounter[trackType] += 1
    def sortIndices(
        self, descriptors: List[TrackDescriptor]
    ) -> List[TrackDescriptor]:
        index = 0
        for d in descriptors:
            d.setIndex(index)
            index += 1
        return descriptors

    def reindexIndices(self, trackDescriptors: list = []):
        tdList = trackDescriptors if trackDescriptors else self.__trackDescriptors
        for trackIndex in range(len(tdList)):
            tdList[trackIndex].setIndex(trackIndex)

    # def getAllTrackDescriptors(self):
    #     """Returns all track descriptors sorted by type: video, audio then subtitles"""
    #     return self.getVideoTracks() + self.getAudioTracks() + self.getSubtitleTracks()
    def getTrackDescriptors(self,
                            trackType: TrackType = None) -> List[TrackDescriptor]:

        if trackType is None:
            return self.__trackDescriptors

        descriptorList = []
        for td in self.__trackDescriptors:
            if td.getType() == trackType:
                descriptorList.append(td)

        return descriptorList
    def getVideoTracks(self) -> List[TrackDescriptor]:
        return [v for v in self.__trackDescriptors if v.getType() == TrackType.VIDEO]

    def getAudioTracks(self) -> List[TrackDescriptor]:
        return [a for a in self.__trackDescriptors if a.getType() == TrackType.AUDIO]

    def getSubtitleTracks(self) -> List[TrackDescriptor]:
        return [s for s in self.__trackDescriptors if s.getType() == TrackType.SUBTITLE]

    def getAttachmentTracks(self) -> List[TrackDescriptor]:
        return [s for s in self.__trackDescriptors if s.getType() == TrackType.ATTACHMENT]
    def getImportFileTokens(self, use_sub_index: bool = True):
        """Generate ffmpeg import options for external stream files"""

        importFileTokens = []

        td: TrackDescriptor
        for td in self.__trackDescriptors:

            importedFilePath = td.getExternalSourceFilePath()

            if importedFilePath:

                self.__logger.info(f"Substituting subtitle stream #{td.getIndex()} "
                                   + f"({td.getType().label()}:{td.getSubIndex()}) "
                                   + f"with import from file {importedFilePath}")

                importFileTokens += [
                    "-i",
                    importedFilePath,
                ]

        return importFileTokens
    def getInputMappingTokens(self,
                              use_sub_index: bool = True,
                              only_video: bool = False,
                              sourceMediaDescriptor: Self = None):
        """Tracks must be reordered for source index order"""

        inputMappingTokens = []

        sortedTrackDescriptors = sorted(self.__trackDescriptors, key=lambda d: d.getIndex())
        sourceTrackDescriptorsByIndex = {
            td.getIndex(): td
            for td in (
                sourceMediaDescriptor.getTrackDescriptors()
                if sourceMediaDescriptor is not None
                else sortedTrackDescriptors
            )
        }

        # raise click.ClickException(' '.join([f"\nindex={td.getIndex()} subIndex={td.getSubIndex()} srcIndex={td.getSourceIndex()} type={td.getType().label()}" for td in self.__trackDescriptors]))

        filePointer = 1
        for trackIndex in range(len(sortedTrackDescriptors)):

            td: TrackDescriptor = sortedTrackDescriptors[trackIndex]

            # HINT: Attached thumbnails are not supported by the .webm container format
            if td.getCodec() != TrackCodec.PNG:

                sourceTrackDescriptor = sourceTrackDescriptorsByIndex.get(td.getSourceIndex())
                if sourceTrackDescriptor is None:
                    raise ValueError(f"No source track descriptor found for source index {td.getSourceIndex()}")

                stdi = sourceTrackDescriptor.getIndex()
                stdsi = sourceTrackDescriptor.getSubIndex()

                trackType = td.getType()
                trackCodec = td.getCodec()

                if (trackType != TrackType.ATTACHMENT
                        and (trackType == TrackType.VIDEO or not only_video)):

                    importedFilePath = td.getExternalSourceFilePath()

                    if use_sub_index:

                        if importedFilePath:

                            inputMappingTokens += [
                                "-map",
                                f"{filePointer}:{trackType.indicator()}:0",
                            ]
                            filePointer += 1

                        else:

                            if trackCodec not in [TrackCodec.PGS, TrackCodec.VOBSUB]:
                                inputMappingTokens += [
                                    "-map",
                                    f"0:{trackType.indicator()}:{stdsi}",
                                ]

                    else:
                        if trackCodec not in [TrackCodec.PGS, TrackCodec.VOBSUB]:
                            inputMappingTokens += ["-map", f"0:{stdi}"]

        if sourceMediaDescriptor:
            fontDescriptors = [ftd for ftd in sourceMediaDescriptor.getAttachmentTracks()
                               if ftd.getCodec() == TrackCodec.TTF]
        else:
            fontDescriptors = [ftd for ftd in self.__trackDescriptors
                               if ftd.getType() == TrackType.ATTACHMENT
                               and ftd.getCodec() == TrackCodec.TTF]

        for ad in sorted(fontDescriptors, key=lambda d: d.getIndex()):
            inputMappingTokens += ["-map", f"0:{ad.getIndex()}"]

        return inputMappingTokens
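The `-map` tokens built by `getInputMappingTokens()` come in two shapes: `N:type:0` when a track is replaced by an extra input file (input 0 being the main media file), and `0:type:subIndex` for tracks kept from the main file. A hypothetical sketch with dict stand-ins for the real `TrackDescriptor` objects:

```python
# Hypothetical sketch of the "-map" token shapes emitted by getInputMappingTokens().
# Each dict is a stand-in for a TrackDescriptor.
tracks = [
    {"type": "v", "sub_index": 0, "external": None},
    {"type": "a", "sub_index": 0, "external": None},
    {"type": "s", "sub_index": 0, "external": "ep01_2_eng.vtt"},  # substituted stream
]

tokens = []
file_pointer = 1  # input 0 is the main media file; imported files start at 1
for track in tracks:
    if track["external"]:
        # The replacement stream is the only stream of its type in the imported file.
        tokens += ["-map", f"{file_pointer}:{track['type']}:0"]
        file_pointer += 1
    else:
        tokens += ["-map", f"0:{track['type']}:{track['sub_index']}"]

print(" ".join(tokens))  # -map 0:v:0 -map 0:a:0 -map 1:s:0
```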
    def searchSubtitleFiles(self, searchDirectory, prefix):

        sesld_match = re.compile(f"{prefix}_{MediaDescriptor.SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH}")
        sld_match = re.compile(f"{prefix}_{MediaDescriptor.STREAM_LANGUAGE_DISPOSITIONS_MATCH}")

        subtitleFileDescriptors = []

        for subtitleFilename in os.listdir(searchDirectory):
            if subtitleFilename.startswith(prefix) and subtitleFilename.endswith(
                "." + MediaDescriptor.SUBTITLE_FILE_EXTENSION
            ):

                sesld_result = sesld_match.search(subtitleFilename)
                sld_result = None if sesld_result is not None else sld_match.search(subtitleFilename)

                if sesld_result is not None:

                    subtitleFilePath = os.path.join(searchDirectory, subtitleFilename)
                    if os.path.isfile(subtitleFilePath):

                        subtitleFileDescriptor = {}
                        subtitleFileDescriptor["path"] = subtitleFilePath
                        subtitleFileDescriptor["season"] = int(sesld_result.group(1))
                        subtitleFileDescriptor["episode"] = int(sesld_result.group(2))
                        subtitleFileDescriptor["index"] = int(sesld_result.group(3))
                        subtitleFileDescriptor["language"] = sesld_result.group(4)

                        dispSet = set()
                        dispCaptGroups = sesld_result.groups()
                        numCaptGroups = len(dispCaptGroups)
                        if numCaptGroups > 4:
                            for groupIndex in range(numCaptGroups - 4):
                                disp = TrackDisposition.fromIndicator(dispCaptGroups[groupIndex + 4])
                                if disp is not None:
                                    dispSet.add(disp)
                        subtitleFileDescriptor["disposition_set"] = dispSet

                        subtitleFileDescriptors.append(subtitleFileDescriptor)

                if sld_result is not None:

                    subtitleFilePath = os.path.join(searchDirectory, subtitleFilename)
                    if os.path.isfile(subtitleFilePath):

                        subtitleFileDescriptor = {}
                        subtitleFileDescriptor["path"] = subtitleFilePath
                        subtitleFileDescriptor["index"] = int(sld_result.group(1))
                        subtitleFileDescriptor["language"] = sld_result.group(2)

                        dispSet = set()
                        dispCaptGroups = sld_result.groups()
                        numCaptGroups = len(dispCaptGroups)
                        if numCaptGroups > 2:
                            for groupIndex in range(numCaptGroups - 2):
                                disp = TrackDisposition.fromIndicator(dispCaptGroups[groupIndex + 2])
                                if disp is not None:
                                    dispSet.add(disp)
                        subtitleFileDescriptor["disposition_set"] = dispSet

                        subtitleFileDescriptors.append(subtitleFileDescriptor)

        self.__logger.debug(f"searchSubtitleFiles(): Available subtitle files {subtitleFileDescriptors}")

        return subtitleFileDescriptors
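The filename contract matched by `searchSubtitleFiles()` can be exercised with plain `re`. Note a subtlety of the pattern: a repeated capturing group such as `(?:_([A-Z]{3}))*` retains only its last repetition, so at most one disposition indicator is ever captured per filename. A sketch against hypothetical file names:

```python
import re

# Same pattern as SEASON_EPISODE_STREAM_LANGUAGE_DISPOSITIONS_MATCH, applied to a
# hypothetical file name following the prefix_SxxEyy_index_lang[_DISP] contract.
pattern = re.compile('[sS]([0-9]+)[eE]([0-9]+)_([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*')

m = pattern.search('show_S01E02_2_eng_DEF.vtt')
print(m.groups())  # ('01', '02', '2', 'eng', 'DEF')
print(int(m.group(1)), int(m.group(2)), int(m.group(3)), m.group(4))  # 1 2 2 eng

# Caveat: a repeated capturing group keeps only its LAST repetition, so a name
# carrying two disposition suffixes still yields a single captured group.
m2 = pattern.search('show_S01E02_2_eng_DEF_FOR.vtt')
print(m2.groups())  # ('01', '02', '2', 'eng', 'FOR')
```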
    def importSubtitles(self, searchDirectory, prefix, season: int = -1, episode: int = -1):

        # click.echo(f"Season: {season} Episode: {episode}")
        self.__logger.debug(f"importSubtitles(): Season: {season} Episode: {episode}")

        availableFileSubtitleDescriptors = self.searchSubtitleFiles(searchDirectory, prefix)

        self.__logger.debug(f"importSubtitles(): availableFileSubtitleDescriptors: {availableFileSubtitleDescriptors}")

        subtitleTracks = self.getSubtitleTracks()

        self.__logger.debug(f"importSubtitles(): subtitleTracks: {[s.getIndex() for s in subtitleTracks]}")

        matchingSubtitleFileDescriptors = (
            sorted(
                [
                    d
                    for d in availableFileSubtitleDescriptors
                    if ((season == -1 and episode == -1)
                        or (
                            d.get("season") == int(season)
                            and d.get("episode") == int(episode)
                        ))
                ],
                key=lambda d: d["index"],
            )
            if availableFileSubtitleDescriptors
            else []
        )

        self.__logger.debug(f"importSubtitles(): matchingSubtitleFileDescriptors: {matchingSubtitleFileDescriptors}")

        for msfd in matchingSubtitleFileDescriptors:
            matchingSubtitleTrackDescriptor = [s for s in subtitleTracks if s.getIndex() == msfd["index"]]
            if matchingSubtitleTrackDescriptor:
                # click.echo(f"Found matching subtitle file {msfd['path']}\n")
                self.__logger.debug(f"importSubtitles(): Found matching subtitle file {msfd['path']}")
                matchingTrack = matchingSubtitleTrackDescriptor[0]
                matchingTrack.setExternalSourceFilePath(msfd["path"])

                # Prefer metadata coming from the external single-track source when
                # it is provided explicitly by the filename contract.
                matchingTrack.getTags()["language"] = msfd["language"]
                if msfd["disposition_set"]:
                    matchingTrack.setDispositionSet(msfd["disposition_set"])
    def getConfiguration(self, label: str = ''):
        yield f"--- {label if label else 'MediaDescriptor ' + str(id(self))} {' '.join([str(k) + '=' + str(v) for k, v in self.__mediaTags.items()])}"
        # for td in self.getAllTrackDescriptors():
        for td in self.getTrackDescriptors():
            yield (f"{td.getIndex()}:{td.getType().indicator()}:{td.getSubIndex()} "
                   + '|'.join([d.indicator() for d in td.getDispositionSet()])
                   + ' ' + ' '.join([str(k) + '=' + str(v) for k, v in td.getTags().items()]))
@@ -1,346 +0,0 @@
import click

from ffx.iso_language import IsoLanguage
from ffx.media_descriptor import MediaDescriptor
from ffx.track_descriptor import TrackDescriptor

from ffx.helper import dictDiff, setDiff, DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY, DIFF_UNCHANGED_KEY

from ffx.track_codec import TrackCodec
from ffx.track_disposition import TrackDisposition


class MediaDescriptorChangeSet:

    TAGS_KEY = "tags"
    TRACKS_KEY = "tracks"
    DISPOSITION_SET_KEY = "disposition_set"

    TRACK_DESCRIPTOR_KEY = "track_descriptor"
    def __init__(self, context,
                 targetMediaDescriptor: MediaDescriptor = None,
                 sourceMediaDescriptor: MediaDescriptor = None):

        self.__context = context
        self.__logger = context['logger']

        self.__configurationData = self.__context['config'].getData()

        metadataConfiguration = self.__configurationData['metadata'] if 'metadata' in self.__configurationData.keys() else {}

        self.__signatureTags = metadataConfiguration['signature'] if 'signature' in metadataConfiguration.keys() else {}
        self.__removeGlobalKeys = metadataConfiguration['remove'] if 'remove' in metadataConfiguration.keys() else []
        self.__ignoreGlobalKeys = metadataConfiguration['ignore'] if 'ignore' in metadataConfiguration.keys() else []
        self.__removeTrackKeys = (metadataConfiguration['streams']['remove']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'remove' in metadataConfiguration['streams'].keys() else [])
        self.__ignoreTrackKeys = (metadataConfiguration['streams']['ignore']
                                  if 'streams' in metadataConfiguration.keys()
                                  and 'ignore' in metadataConfiguration['streams'].keys() else [])

        self.__targetTrackDescriptors = targetMediaDescriptor.getTrackDescriptors() if targetMediaDescriptor is not None else []
        self.__sourceTrackDescriptors = sourceMediaDescriptor.getTrackDescriptors() if sourceMediaDescriptor is not None else []
        self.__targetTrackDescriptorsByIndex = {
            trackDescriptor.getIndex(): trackDescriptor
            for trackDescriptor in self.__targetTrackDescriptors
        }
        self.__sourceTrackDescriptorsByIndex = {
            trackDescriptor.getIndex(): trackDescriptor
            for trackDescriptor in self.__sourceTrackDescriptors
        }
        targetMediaTags = targetMediaDescriptor.getTags() if targetMediaDescriptor is not None else {}
        sourceMediaTags = sourceMediaDescriptor.getTags() if sourceMediaDescriptor is not None else {}

        self.__changeSetObj = {}

        # if targetMediaDescriptor is not None:

        tagsDiff = dictDiff(sourceMediaTags,
                            targetMediaTags,
                            ignoreKeys=self.__ignoreGlobalKeys,
                            removeKeys=self.__removeGlobalKeys)

        if tagsDiff:
            self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY] = tagsDiff
        self.__numTargetTracks = len(self.__targetTrackDescriptors)

        # Current track configuration (of file)
        self.__numSourceTracks = len(self.__sourceTrackDescriptors)

        trackCompareResult = {}

        for targetTrackDescriptor in self.__targetTrackDescriptors:
            sourceTrackDescriptor = self.__sourceTrackDescriptorsByIndex.get(
                targetTrackDescriptor.getSourceIndex()
            )

            if sourceTrackDescriptor is None:
                if DIFF_ADDED_KEY not in trackCompareResult.keys():
                    trackCompareResult[DIFF_ADDED_KEY] = {}
                trackCompareResult[DIFF_ADDED_KEY][targetTrackDescriptor.getIndex()] = targetTrackDescriptor
                continue

            trackDiff = self.compareTracks(targetTrackDescriptor, sourceTrackDescriptor)
            if trackDiff:
                if DIFF_CHANGED_KEY not in trackCompareResult.keys():
                    trackCompareResult[DIFF_CHANGED_KEY] = {}
                trackCompareResult[DIFF_CHANGED_KEY][targetTrackDescriptor.getIndex()] = trackDiff

        targetSourceIndices = {
            targetTrackDescriptor.getSourceIndex()
            for targetTrackDescriptor in self.__targetTrackDescriptors
        }
        for sourceTrackDescriptor in self.__sourceTrackDescriptors:
            if sourceTrackDescriptor.getIndex() not in targetSourceIndices:
                if DIFF_REMOVED_KEY not in trackCompareResult.keys():
                    trackCompareResult[DIFF_REMOVED_KEY] = {}
                trackCompareResult[DIFF_REMOVED_KEY][sourceTrackDescriptor.getIndex()] = sourceTrackDescriptor

        if trackCompareResult:
            self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY] = trackCompareResult
    def compareTracks(self,
                      targetTrackDescriptor: TrackDescriptor = None,
                      sourceTrackDescriptor: TrackDescriptor = None):

        sourceTrackTags = sourceTrackDescriptor.getTags() if sourceTrackDescriptor is not None else {}
        targetTrackTags = (
            self.normalizeTrackTags(targetTrackDescriptor.getTags())
            if targetTrackDescriptor is not None
            else {}
        )

        trackCompareResult = {}

        tagsDiffResult = dictDiff(sourceTrackTags,
                                  targetTrackTags,
                                  ignoreKeys=self.__ignoreTrackKeys,
                                  removeKeys=self.__removeTrackKeys)

        if tagsDiffResult:
            trackCompareResult[MediaDescriptorChangeSet.TAGS_KEY] = tagsDiffResult

        sourceDispositionSet = sourceTrackDescriptor.getDispositionSet() if sourceTrackDescriptor is not None else set()
        targetDispositionSet = targetTrackDescriptor.getDispositionSet() if targetTrackDescriptor is not None else set()

        # if targetTrackDescriptor.getIndex() == 3:
        #     raise click.ClickException(f"{sourceDispositionSet} {targetDispositionSet}")

        dispositionDiffResult = setDiff(sourceDispositionSet, targetDispositionSet)

        if dispositionDiffResult:
            trackCompareResult[MediaDescriptorChangeSet.DISPOSITION_SET_KEY] = dispositionDiffResult

        return trackCompareResult
    def normalizeTrackTagValue(self, tagKey, tagValue):
        if tagKey != "language":
            return tagValue

        if isinstance(tagValue, IsoLanguage):
            return tagValue.threeLetter()

        trackLanguage = IsoLanguage.findThreeLetter(str(tagValue))
        if trackLanguage != IsoLanguage.UNDEFINED:
            return trackLanguage.threeLetter()

        return tagValue

    def normalizeTrackTags(self, trackTags: dict):
        return {
            tagKey: self.normalizeTrackTagValue(tagKey, tagValue)
            for tagKey, tagValue in trackTags.items()
        }
    def generateDispositionTokens(self):
        """
        Example: -disposition:s:0 default -disposition:s:1 0
        """
        dispositionTokens = []

for ttd in self.__targetTrackDescriptors:
|
|
||||||
|
|
||||||
targetDispositions = ttd.getDispositionSet()
|
|
||||||
streamIndicator = ttd.getType().indicator()
|
|
||||||
subIndex = ttd.getSubIndex()
|
|
||||||
|
|
||||||
if targetDispositions:
|
|
||||||
dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '+'.join([d.label() for d in targetDispositions])]
|
|
||||||
# if not targetDispositions and removedDispositions:
|
|
||||||
else:
|
|
||||||
dispositionTokens += [f"-disposition:{streamIndicator}:{subIndex}", '0']
|
|
||||||
|
|
||||||
return dispositionTokens
|
|
||||||
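Editor's note: the two branches above map directly onto ffmpeg's `-disposition` option (joining active disposition labels with `+`, and clearing a stream's dispositions with `'0'`). A minimal standalone sketch of that token layout, using hypothetical stand-in tuples rather than the real `TrackDescriptor` API:

```python
def build_disposition_tokens(tracks):
    """tracks: list of (stream_indicator, sub_index, {disposition labels}) — illustrative only."""
    tokens = []
    for indicator, sub_index, dispositions in tracks:
        if dispositions:
            # e.g. -disposition:a:0 default+forced
            tokens += [f"-disposition:{indicator}:{sub_index}",
                       '+'.join(sorted(dispositions))]
        else:
            # '0' clears all dispositions on the stream
            tokens += [f"-disposition:{indicator}:{sub_index}", '0']
    return tokens

print(build_disposition_tokens([('a', 0, {'default'}), ('s', 1, set())]))
# → ['-disposition:a:0', 'default', '-disposition:s:1', '0']
```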
    def generateMetadataTokens(self):

        metadataTokens = []

        if MediaDescriptorChangeSet.TAGS_KEY in self.__changeSetObj.keys():

            addedMediaTags = (self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY]
                              if DIFF_ADDED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
            removedMediaTags = (self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY]
                                if DIFF_REMOVED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
            changedMediaTags = (self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY]
                                if DIFF_CHANGED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})

            outputMediaTags = addedMediaTags | changedMediaTags

            if ('no_signature' not in self.__context.keys()
                    or not self.__context['no_signature']):
                outputMediaTags = outputMediaTags | self.__signatureTags

            # outputMediaTags = {k: v for k, v in outputMediaTags.items() if k not in self.__removeGlobalKeys}

            for tagKey, tagValue in outputMediaTags.items():
                metadataTokens += ["-metadata:g",
                                   f"{tagKey}={tagValue}"]

            # NOTE: changedMediaTags is already merged into outputMediaTags above,
            # so these tokens are emitted a second time; ffmpeg applies the last
            # (identical) value, making the duplication harmless but redundant.
            for tagKey, tagValue in changedMediaTags.items():
                metadataTokens += ["-metadata:g",
                                   f"{tagKey}={tagValue}"]

            for removeKey in removedMediaTags.keys():
                metadataTokens += ["-metadata:g",
                                   f"{removeKey}="]


        if MediaDescriptorChangeSet.TRACKS_KEY in self.__changeSetObj.keys():

            if DIFF_ADDED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                addedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY]
                trackDescriptor: TrackDescriptor
                for trackDescriptor in addedTracks.values():
                    for tagKey, tagValue in self.normalizeTrackTags(trackDescriptor.getTags()).items():
                        if tagKey not in self.__removeTrackKeys:
                            metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                               + f":{trackDescriptor.getSubIndex()}",
                                               f"{tagKey}={tagValue}"]

            if DIFF_CHANGED_KEY in self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                changedTracks: dict = self.__changeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY]
                trackDiffObj: dict
                for trackIndex, trackDiffObj in changedTracks.items():

                    if MediaDescriptorChangeSet.TAGS_KEY in trackDiffObj.keys():

                        tagsDiffObj = trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY]

                        addedTrackTags = tagsDiffObj[DIFF_ADDED_KEY] if DIFF_ADDED_KEY in tagsDiffObj.keys() else {}
                        changedTrackTags = tagsDiffObj[DIFF_CHANGED_KEY] if DIFF_CHANGED_KEY in tagsDiffObj.keys() else {}
                        unchangedTrackTags = tagsDiffObj[DIFF_UNCHANGED_KEY] if DIFF_UNCHANGED_KEY in tagsDiffObj.keys() else {}
                        removedTrackTags = tagsDiffObj[DIFF_REMOVED_KEY] if DIFF_REMOVED_KEY in tagsDiffObj.keys() else {}

                        outputTrackTags = addedTrackTags | changedTrackTags

                        trackDescriptor = self.__targetTrackDescriptorsByIndex[trackIndex]

                        for tagKey, tagValue in self.normalizeTrackTags(outputTrackTags).items():
                            metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                               + f":{trackDescriptor.getSubIndex()}",
                                               f"{tagKey}={tagValue}"]

                        if trackDescriptor.getExternalSourceFilePath():
                            # When a single-track external file substitutes the
                            # media payload, keep metadata from the regular
                            # source track unless the external/target side
                            # overrides it explicitly.
                            preservedTrackTags = (
                                {
                                    tagKey: tagValue
                                    for tagKey, tagValue in removedTrackTags.items()
                                    if tagKey not in self.__removeTrackKeys
                                }
                                | unchangedTrackTags
                            )
                            for tagKey, tagValue in self.normalizeTrackTags(preservedTrackTags).items():
                                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                                   + f":{trackDescriptor.getSubIndex()}",
                                                   f"{tagKey}={tagValue}"]
                        else:
                            for removeKey in removedTrackTags.keys():
                                metadataTokens += [f"-metadata:s:{trackDescriptor.getType().indicator()}"
                                                   + f":{trackDescriptor.getSubIndex()}",
                                                   f"{removeKey}="]

        for tagKey, tagValue in self.__context.get('encoding_metadata_tags', {}).items():
            metadataTokens += ["-metadata:g", f"{tagKey}={tagValue}"]

        metadataTokens += self.generateConfiguredRemovalMetadataTokens()

        return metadataTokens

    def getChangeSetObj(self):
        return self.__changeSetObj


    def generateConfiguredRemovalMetadataTokens(self):
        metadataTokens = []

        for removeKey in self.__removeGlobalKeys:
            metadataTokens += ["-metadata:g", f"{removeKey}="]

        for trackDescriptor in self.__targetTrackDescriptors:
            for removeKey in self.__removeTrackKeys:
                metadataTokens += [
                    f"-metadata:s:{trackDescriptor.getType().indicator()}:{trackDescriptor.getSubIndex()}",
                    f"{removeKey}=",
                ]

        return metadataTokens

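Editor's note: setting a metadata key to an empty value (`key=`) is how ffmpeg deletes a tag, both globally (`-metadata:g`) and per stream (`-metadata:s:<type>:<index>`). A standalone sketch of the removal-token layout produced above, with hypothetical inputs in place of the configured key sets:

```python
def removal_tokens(global_keys, track_keys, streams):
    """streams: list of (stream_indicator, sub_index) pairs — illustrative only."""
    tokens = []
    for key in global_keys:
        # global scope: -metadata:g key=
        tokens += ["-metadata:g", f"{key}="]
    for indicator, sub_index in streams:
        for key in track_keys:
            # stream scope: -metadata:s:a:0 key=
            tokens += [f"-metadata:s:{indicator}:{sub_index}", f"{key}="]
    return tokens

print(removal_tokens(['encoder'], ['handler_name'], [('a', 0)]))
# → ['-metadata:g', 'encoder=', '-metadata:s:a:0', 'handler_name=']
```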
@@ -1,748 +0,0 @@
import os
import re

import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input, DataTable
from textual.containers import Grid

from ffx.audio_layout import AudioLayout

from .show_details_screen import ShowDetailsScreen
from .pattern_details_screen import PatternDetailsScreen
from .screen_support import build_screen_bootstrap, build_screen_controllers

from ffx.track_type import TrackType
from ffx.track_codec import TrackCodec
from ffx.model.track import Track

from ffx.track_disposition import TrackDisposition
from ffx.track_descriptor import TrackDescriptor
from ffx.show_descriptor import ShowDescriptor

from textual.widgets._data_table import CellDoesNotExist

from ffx.media_descriptor import MediaDescriptor
from ffx.file_properties import FileProperties

from ffx.media_descriptor_change_set import MediaDescriptorChangeSet

from ffx.helper import formatRichColor, DIFF_ADDED_KEY, DIFF_CHANGED_KEY, DIFF_REMOVED_KEY, DIFF_UNCHANGED_KEY

# Screen[dict[int, str, int]]
class MediaDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 8;
        grid-rows: 8 2 2 2 2 8 2 2 8;
        grid-columns: 15 25 90 10 105;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        min-height: 40;
    }

    #toplabel {
        height: 1;
    }
    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .triple {
        row-span: 3;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .purple {
        tint: purple 40%;
    }

    .yellow {
        tint: yellow 40%;
    }

    #differences-table {
        row-span: 8;
        /* tint: magenta 40%; */
    }

    /* #pattern_input {
        tint: red 40%;
    } */
    """

    TRACKS_TABLE_INDEX_COLUMN_LABEL = "Index"
    TRACKS_TABLE_TYPE_COLUMN_LABEL = "Type"
    TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL = "SubIndex"
    TRACKS_TABLE_CODEC_COLUMN_LABEL = "Codec"
    TRACKS_TABLE_LAYOUT_COLUMN_LABEL = "Layout"
    TRACKS_TABLE_LANGUAGE_COLUMN_LABEL = "Language"
    TRACKS_TABLE_TITLE_COLUMN_LABEL = "Title"
    TRACKS_TABLE_DEFAULT_COLUMN_LABEL = "Default"
    TRACKS_TABLE_FORCED_COLUMN_LABEL = "Forced"

    DIFFERENCES_TABLE_DIFFERENCES_COLUMN_LABEL = 'Differences (file->db/output)'


    BINDINGS = [
        ("n", "new_pattern", "New Pattern"),
        ("u", "update_pattern", "Update Pattern"),
        ("e", "edit_pattern", "Edit Pattern"),
    ]

    def __init__(self):
        super().__init__()

        bootstrap = build_screen_bootstrap(self.app.getContext())
        self.context = bootstrap.context

        self.__removeGlobalKeys = bootstrap.remove_global_keys
        self.__ignoreGlobalKeys = bootstrap.ignore_global_keys

        controllers = build_screen_controllers(
            self.context,
            pattern=True,
            show=True,
            track=True,
            tag=True,
        )
        self.__pc = controllers['pattern']
        self.__sc = controllers['show']
        self.__tc = controllers['track']
        self.__tac = controllers['tag']

        if 'command' not in self.context.keys() or self.context['command'] != 'inspect':
            raise click.ClickException("MediaDetailsScreen.__init__(): Can only perform command 'inspect'")

        if ('arguments' not in self.context.keys()
                or 'filename' not in self.context['arguments'].keys()
                or not self.context['arguments']['filename']):
            raise click.ClickException("MediaDetailsScreen.__init__(): Argument 'filename' must be provided for command 'inspect'")

        self.__mediaFilename = self.context['arguments']['filename']

        if not os.path.isfile(self.__mediaFilename):
            raise click.ClickException(f"MediaDetailsScreen.__init__(): Media file {self.__mediaFilename} does not exist")

        self.loadProperties()

    def removeShow(self, showId: int = -1):
        """Remove a show entry from the DataTable.

        Removes the <New show> entry if showId is not set."""

        for rowKey, row in self.showsTable.rows.items():  # dict[RowKey, Row]

            rowData = self.showsTable.get_row(rowKey)

            try:
                if (showId == -1 and rowData[0] == ' '
                        or showId == int(rowData[0])):
                    self.showsTable.remove_row(rowKey)
                    return
            except Exception:  # row id not numeric; skip it
                continue


    def getRowIndexFromShowId(self, showId: int = -1) -> int:
        """Find the index of the row whose show id matches showId
        (or of the <New show> placeholder row if showId is not set)."""

        for rowKey, row in self.showsTable.rows.items():  # dict[RowKey, Row]

            rowData = self.showsTable.get_row(rowKey)

            try:
                if ((showId == -1 and rowData[0] == ' ')
                        or showId == int(rowData[0])):
                    return int(self.showsTable.get_row_index(rowKey))
            except Exception:  # row id not numeric; skip it
                continue

        return None

    def loadProperties(self):

        self.__mediaFileProperties = FileProperties(self.context, self.__mediaFilename)
        self.__sourceMediaDescriptor = self.__mediaFileProperties.getMediaDescriptor()

        # HINT: This is None if the filename did not match anything in the database
        self.__currentPattern = self.__mediaFileProperties.getPattern()

        # no tags available
        self.__targetMediaDescriptor = self.__currentPattern.getMediaDescriptor(self.context) if self.__currentPattern is not None else None

        # Enumerate differences between the media descriptor read
        # from the file (=current) and the one stored in the database (=target)
        try:
            mdcs = MediaDescriptorChangeSet(self.context,
                                            self.__targetMediaDescriptor,
                                            self.__sourceMediaDescriptor)

            self.__mediaChangeSetObj = mdcs.getChangeSetObj()
        except ValueError:
            self.__mediaChangeSetObj = {}

    def updateDifferences(self):

        self.loadProperties()

        self.differencesTable.clear()

        if MediaDescriptorChangeSet.TAGS_KEY in self.__mediaChangeSetObj.keys():

            if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY].items():
                    if tagKey not in self.__ignoreGlobalKeys:
                        row = (f"add media tag: key='{tagKey}' value='{tagValue}'",)
                        self.differencesTable.add_row(*map(str, row))

            if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY].items():
                    if tagKey not in self.__ignoreGlobalKeys and tagKey not in self.__removeGlobalKeys:
                        row = (f"remove media tag: key='{tagKey}' value='{tagValue}'",)
                        self.differencesTable.add_row(*map(str, row))

            if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
                for tagKey, tagValue in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY].items():
                    if tagKey not in self.__ignoreGlobalKeys:
                        row = (f"change media tag: key='{tagKey}' value='{tagValue}'",)
                        self.differencesTable.add_row(*map(str, row))


        if MediaDescriptorChangeSet.TRACKS_KEY in self.__mediaChangeSetObj.keys():

            if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():

                trackDescriptor: TrackDescriptor
                for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY].items():
                    row = (f"add {trackDescriptor.getType().label()} track: index={trackDescriptor.getIndex()} lang={trackDescriptor.getLanguage().threeLetter()}",)
                    self.differencesTable.add_row(*map(str, row))

            if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
                for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_REMOVED_KEY].items():
                    row = (f"remove stream #{trackIndex}",)
                    self.differencesTable.add_row(*map(str, row))

            if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():

                changedTracks: dict = self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY]

                targetTrackDescriptors = self.__targetMediaDescriptor.getTrackDescriptors()

                trackDiffObj: dict
                for trackIndex, trackDiffObj in changedTracks.items():

                    ttd: TrackDescriptor = targetTrackDescriptors[trackIndex]

                    if MediaDescriptorChangeSet.TAGS_KEY in trackDiffObj.keys():

                        removedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY]
                                       if DIFF_REMOVED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
                        for tagKey, tagValue in removedTags.items():
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) remove key={tagKey} value={tagValue}",)
                            self.differencesTable.add_row(*map(str, row))

                        addedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY]
                                     if DIFF_ADDED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
                        for tagKey, tagValue in addedTags.items():
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) add key={tagKey} value={tagValue}",)
                            self.differencesTable.add_row(*map(str, row))

                        changedTags = (trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY]
                                       if DIFF_CHANGED_KEY in trackDiffObj[MediaDescriptorChangeSet.TAGS_KEY].keys() else {})
                        for tagKey, tagValue in changedTags.items():
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) change key={tagKey} value={tagValue}",)
                            self.differencesTable.add_row(*map(str, row))


                    if MediaDescriptorChangeSet.DISPOSITION_SET_KEY in trackDiffObj.keys():

                        addedDispositions = (trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY][DIFF_ADDED_KEY]
                                             if DIFF_ADDED_KEY in trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY].keys() else set())
                        for ad in addedDispositions:
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) add disposition={ad.label()}",)
                            self.differencesTable.add_row(*map(str, row))

                        removedDispositions = (trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY][DIFF_REMOVED_KEY]
                                               if DIFF_REMOVED_KEY in trackDiffObj[MediaDescriptorChangeSet.DISPOSITION_SET_KEY].keys() else set())
                        for rd in removedDispositions:
                            row = (f"change stream #{ttd.getIndex()} ({ttd.getType().label()}:{ttd.getSubIndex()}) remove disposition={rd.label()}",)
                            self.differencesTable.add_row(*map(str, row))

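Editor's note: the nested dict walked by `updateDifferences()` has roughly the shape sketched below. The string keys are placeholders for the real `MediaDescriptorChangeSet.*_KEY` and `DIFF_*` constants, and the values are invented for illustration:

```python
# Hypothetical change-set shape (keys/values illustrative, not the real constants).
change_set = {
    'tags': {                        # MediaDescriptorChangeSet.TAGS_KEY
        'added':   {'title': 'Pilot'},
        'removed': {'comment': 'old note'},
        'changed': {'language': 'eng'},
    },
    'tracks': {                      # MediaDescriptorChangeSet.TRACKS_KEY
        'changed': {
            0: {                     # per-track diff object, keyed by track index
                'tags': {'added': {'title': 'Main'}},
                'disposition_set': {'added': {'default'}},
            },
        },
    },
}

# Each per-track diff is looked up by index, then each DIFF_* bucket is
# checked for presence before iterating, exactly as the method above does.
print(sorted(change_set['tags'].keys()))
# → ['added', 'changed', 'removed']
```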
    def on_mount(self):

        if self.__currentPattern is None:
            row = (' ', '<New show>', ' ')
            self.showsTable.add_row(*map(str, row))

        for show in self.__sc.getAllShows():
            # Convert each element to a string before adding
            row = (int(show.id), show.name, show.year)
            self.showsTable.add_row(*map(str, row))

        for mediaTagKey, mediaTagValue in self.__sourceMediaDescriptor.getTags().items():

            textColor = None
            if mediaTagKey in self.__ignoreGlobalKeys:
                textColor = 'blue'
            if mediaTagKey in self.__removeGlobalKeys:
                textColor = 'red'

            row = (formatRichColor(mediaTagKey, textColor), formatRichColor(mediaTagValue, textColor))
            self.mediaTagsTable.add_row(*map(str, row))

        self.updateTracks()


        if self.__currentPattern is not None:

            showIdentifier = self.__currentPattern.getShowId()
            showRowIndex = self.getRowIndexFromShowId(showIdentifier)
            if showRowIndex is not None:
                self.showsTable.move_cursor(row=showRowIndex)

            self.query_one("#pattern_input", Input).value = self.__currentPattern.getPattern()

            self.updateDifferences()

        else:

            self.query_one("#pattern_input", Input).value = self.__mediaFilename
            self.highlightPattern(True)

    def highlightPattern(self, state: bool):
        if state:
            self.query_one("#pattern_input", Input).styles.background = 'red'
        else:
            self.query_one("#pattern_input", Input).styles.background = None

    def updateTracks(self):

        self.tracksTable.clear()

        # trackDescriptorList = self.__sourceMediaDescriptor.getAllTrackDescriptors()
        trackDescriptorList = self.__sourceMediaDescriptor.getTrackDescriptors()

        typeCounter = {}

        for td in trackDescriptorList:

            trackType = td.getType()
            if trackType not in typeCounter.keys():
                typeCounter[trackType] = 0

            dispoSet = td.getDispositionSet()
            audioLayout = td.getAudioLayout()
            row = (td.getIndex(),
                   trackType.label(),
                   typeCounter[trackType],
                   td.getCodec().label(),
                   audioLayout.label() if trackType == TrackType.AUDIO
                   and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
                   td.getLanguage().label(),
                   td.getTitle(),
                   'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
                   'Yes' if TrackDisposition.FORCED in dispoSet else 'No')

            self.tracksTable.add_row(*map(str, row))

            typeCounter[trackType] += 1

    def compose(self):

        # Create the DataTable widget
        self.showsTable = DataTable(classes="two")

        # Define the columns with headers
        self.column_key_show_id = self.showsTable.add_column("ID", width=10)
        self.column_key_show_name = self.showsTable.add_column("Name", width=80)
        self.column_key_show_year = self.showsTable.add_column("Year", width=10)

        self.showsTable.cursor_type = 'row'


        self.mediaTagsTable = DataTable(classes="two")

        # Define the columns with headers
        self.column_key_track_tag_key = self.mediaTagsTable.add_column("Key", width=30)
        self.column_key_track_tag_value = self.mediaTagsTable.add_column("Value", width=70)

        self.mediaTagsTable.cursor_type = 'row'


        self.tracksTable = DataTable(classes="two")

        # Define the columns with headers
        self.column_key_track_index = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_INDEX_COLUMN_LABEL, width=5)
        self.column_key_track_type = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_TYPE_COLUMN_LABEL, width=10)
        self.column_key_track_sub_index = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_SUB_INDEX_COLUMN_LABEL, width=8)
        self.column_key_track_codec = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_CODEC_COLUMN_LABEL, width=10)
        self.column_key_track_layout = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_LAYOUT_COLUMN_LABEL, width=10)
        self.column_key_track_language = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_LANGUAGE_COLUMN_LABEL, width=15)
        self.column_key_track_title = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_TITLE_COLUMN_LABEL, width=48)
        self.column_key_track_default = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_DEFAULT_COLUMN_LABEL, width=8)
        self.column_key_track_forced = self.tracksTable.add_column(MediaDetailsScreen.TRACKS_TABLE_FORCED_COLUMN_LABEL, width=8)

        self.tracksTable.cursor_type = 'row'


        # Create the DataTable widget
        self.differencesTable = DataTable(id='differences-table')  # classes="triple"

        # Define the columns with headers
        self.column_key_differences = self.differencesTable.add_column(MediaDetailsScreen.DIFFERENCES_TABLE_DIFFERENCES_COLUMN_LABEL, width=100)

        self.differencesTable.cursor_type = 'row'

        yield Header()

        with Grid():

            # 1
            yield Static("Show")
            yield self.showsTable
            yield Static(" ")
            yield self.differencesTable

            # 2
            yield Static(" ", classes="four")

            # 3
            yield Static(" ")
            yield Button("Substitute", id="pattern_button")
            yield Static(" ", classes="two")

            # 4
            yield Static("Pattern")
            yield Input(type="text", id='pattern_input', classes="two")

            yield Static(" ")

            # 5
            yield Static(" ", classes="four")

            # 6
            yield Static("Media Tags")
            yield self.mediaTagsTable
            yield Static(" ")

            # 7
            yield Static(" ", classes="four")

            # 8
            yield Static(" ")
            yield Button("Set Default", id="select_default_button")
            yield Button("Set Forced", id="select_forced_button")
            yield Static(" ")

            # 9
            yield Static("Streams")
            yield self.tracksTable
            yield Static(" ")

        yield Footer()

    def getPatternObjFromInput(self):
        """Return show id and pattern as an obj read from the corresponding inputs."""
        patternObj = {}
        try:
            patternObj['show_id'] = self.getSelectedShowDescriptor().getId()
            patternObj['pattern'] = str(self.query_one("#pattern_input", Input).value)
        except Exception:  # no show selected or input not found
            return {}
        return patternObj

    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "pattern_button":

            pattern = self.query_one("#pattern_input", Input).value

            patternMatch = re.search(FileProperties.SE_INDICATOR_PATTERN, pattern)

            if patternMatch:
                self.query_one("#pattern_input", Input).value = pattern.replace(patternMatch.group(1), FileProperties.SE_INDICATOR_PATTERN)


        if event.button.id == "select_default_button":
            selectedTrackDescriptor = self.getSelectedTrackDescriptor()
            self.__sourceMediaDescriptor.setDefaultSubTrack(selectedTrackDescriptor.getType(), selectedTrackDescriptor.getSubIndex())
            self.updateTracks()

        if event.button.id == "select_forced_button":
            selectedTrackDescriptor = self.getSelectedTrackDescriptor()
            self.__sourceMediaDescriptor.setForcedSubTrack(selectedTrackDescriptor.getType(), selectedTrackDescriptor.getSubIndex())
            self.updateTracks()

    def getSelectedTrackDescriptor(self):
        """Returns a partial track descriptor"""
        try:

            # Fetch the currently selected row when 'Enter' is pressed
            # selected_row_index = self.table.cursor_row
            row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)

            if row_key is not None:
                selected_track_data = self.tracksTable.get_row(row_key)

                kwargs = {}
                kwargs[TrackDescriptor.CONTEXT_KEY] = self.context
                kwargs[TrackDescriptor.INDEX_KEY] = int(selected_track_data[0])
                kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.fromLabel(selected_track_data[1])
                kwargs[TrackDescriptor.SUB_INDEX_KEY] = int(selected_track_data[2])
                kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.fromLabel(selected_track_data[3])
                kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.fromLabel(selected_track_data[4])

                return TrackDescriptor(**kwargs)
            else:
                return None

        except CellDoesNotExist:
            return None

    def getSelectedShowDescriptor(self) -> ShowDescriptor:

        try:

            row_key, col_key = self.showsTable.coordinate_to_cell_key(self.showsTable.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.showsTable.get_row(row_key)

                try:
                    kwargs = {}

                    kwargs[ShowDescriptor.CONTEXT_KEY] = self.context
                    kwargs[ShowDescriptor.ID_KEY] = int(selected_row_data[0])
                    kwargs[ShowDescriptor.NAME_KEY] = str(selected_row_data[1])
                    kwargs[ShowDescriptor.YEAR_KEY] = int(selected_row_data[2])

                    return ShowDescriptor(**kwargs)

                except ValueError:
                    return None

        except CellDoesNotExist:
            return None

    def handle_new_pattern(self, showDescriptor: ShowDescriptor):
        """Create a new pattern for the given show and refresh the shows table."""

        if type(showDescriptor) is not ShowDescriptor:
            raise TypeError("MediaDetailsScreen.handle_new_pattern(): Argument 'showDescriptor' has to be of type ShowDescriptor")

        self.removeShow()

        showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
        if showRowIndex is None:
            show = (showDescriptor.getId(), showDescriptor.getName(), showDescriptor.getYear())
            self.showsTable.add_row(*map(str, show))

        showRowIndex = self.getRowIndexFromShowId(showDescriptor.getId())
        if showRowIndex is not None:
            self.showsTable.move_cursor(row=showRowIndex)

        patternObj = self.getPatternObjFromInput()

        if patternObj:
            mediaTags = {}
            for tagKey, tagValue in self.__sourceMediaDescriptor.getTags().items():

                # Filter out tags that make no sense to preserve
                if tagKey not in self.__ignoreGlobalKeys and tagKey not in self.__removeGlobalKeys:
                    mediaTags[tagKey] = tagValue

            patternId = self.__pc.savePatternSchema(
                patternObj,
                trackDescriptors=self.__sourceMediaDescriptor.getTrackDescriptors(),
                mediaTags=mediaTags,
            )
            if patternId:
                self.highlightPattern(False)

    def action_new_pattern(self):
        """Add a new pattern.

        If the corresponding show does not exist in the DB, it is added beforehand."""

        selectedShowDescriptor = self.getSelectedShowDescriptor()

        # HINT: The screen callback is only invoked after this method has exited. As a
        # workaround the callback is executed directly from here with a mock-up screen
        # result containing the necessary keys to perform correctly.
        if selectedShowDescriptor is None:
            self.app.push_screen(ShowDetailsScreen(), self.handle_new_pattern)
        else:
            self.handle_new_pattern(selectedShowDescriptor)

def action_update_pattern(self):
|
|
||||||
"""Updating patterns
|
|
||||||
|
|
||||||
When updating the database the actions must reverse the difference (eq to diff db->file)"""
|
|
||||||
|
|
||||||
if self.__currentPattern is not None:
|
|
||||||
patternObj = self.getPatternObjFromInput()
|
|
||||||
if (patternObj
|
|
||||||
and self.__currentPattern.getPattern() != patternObj['pattern']):
|
|
||||||
return self.__pc.updatePattern(self.__currentPattern.getId(), patternObj)
|
|
||||||
|
|
||||||
self.loadProperties()
|
|
||||||
|
|
||||||
# __mediaChangeSetObj is file vs database
|
|
||||||
if MediaDescriptorChangeSet.TAGS_KEY in self.__mediaChangeSetObj.keys():
|
|
||||||
|
|
||||||
if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
|
|
||||||
for addedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_ADDED_KEY].keys():
|
|
||||||
# click.ClickException(f"delete media tag patternId={self.__currentPattern.getId()} addedTagKey={addedTagKey}")
|
|
||||||
self.__tac.deleteMediaTagByKey(self.__currentPattern.getId(), addedTagKey)
|
|
||||||
|
|
||||||
if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
|
|
||||||
for removedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_REMOVED_KEY].keys():
|
|
||||||
currentTags = self.__sourceMediaDescriptor.getTags()
|
|
||||||
# click.ClickException(f"delete media tag patternId={self.__currentPattern.getId()} removedTagKey={removedTagKey} currentTags={currentTags[removedTagKey]}")
|
|
||||||
self.__tac.updateMediaTag(self.__currentPattern.getId(), removedTagKey, currentTags[removedTagKey])
|
|
||||||
|
|
||||||
if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY].keys():
|
|
||||||
for changedTagKey in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TAGS_KEY][DIFF_CHANGED_KEY].keys():
|
|
||||||
currentTags = self.__sourceMediaDescriptor.getTags()
|
|
||||||
# click.ClickException(f"delete media tag patternId={self.__currentPattern.getId()} changedTagKey={changedTagKey} currentTags={currentTags[changedTagKey]}")
|
|
||||||
self.__tac.updateMediaTag(self.__currentPattern.getId(), changedTagKey, currentTags[changedTagKey])
|
|
||||||
|
|
||||||
if MediaDescriptorChangeSet.TRACKS_KEY in self.__mediaChangeSetObj.keys():
|
|
||||||
|
|
||||||
if DIFF_ADDED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
|
|
||||||
|
|
||||||
for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_ADDED_KEY].items():
|
|
||||||
#targetTracks = [t for t in self.__targetMediaDescriptor.getAllTrackDescriptors() if t.getIndex() == addedTrackIndex]
|
|
||||||
# if targetTracks:
|
|
||||||
# self.__tc.deleteTrack(targetTracks[0].getId()) # id
|
|
||||||
# self.__tc.deleteTrack(targetTracks[0].getId())
|
|
||||||
self.__tc.addTrack(trackDescriptor, patternId = self.__currentPattern.getId())
|
|
||||||
|
|
||||||
if DIFF_REMOVED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
|
|
||||||
trackDescriptor: TrackDescriptor
|
|
||||||
for trackIndex, trackDescriptor in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_REMOVED_KEY].items():
|
|
||||||
# Track per inspect/update hinzufügen
|
|
||||||
#self.__tc.addTrack(removedTrack, patternId = self.__currentPattern.getId())
|
|
||||||
self.__tc.deleteTrack(trackDescriptor.getId())
|
|
||||||
|
|
||||||
if DIFF_CHANGED_KEY in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY].keys():
|
|
||||||
|
|
||||||
# [vsTracks[tp].getIndex()] = trackDiff
|
|
||||||
for trackIndex, trackDiff in self.__mediaChangeSetObj[MediaDescriptorChangeSet.TRACKS_KEY][DIFF_CHANGED_KEY].items():
|
|
||||||
|
|
||||||
targetTracks = [t for t in self.__targetMediaDescriptor.getTrackDescriptors() if t.getIndex() == trackIndex]
|
|
||||||
targetTrackId = targetTracks[0].getId() if targetTracks else None
|
|
||||||
targetTrackIndex = targetTracks[0].getIndex() if targetTracks else None
|
|
||||||
|
|
||||||
changedCurrentTracks = [t for t in self.__sourceMediaDescriptor.getTrackDescriptors() if t.getIndex() == trackIndex]
|
|
||||||
# changedCurrentTrackId #HINT: Undefined as track descriptors do not come from file with track_id
|
|
||||||
|
|
||||||
if TrackDescriptor.TAGS_KEY in trackDiff.keys():
|
|
||||||
tagsDiff = trackDiff[TrackDescriptor.TAGS_KEY]
|
|
||||||
|
|
||||||
if DIFF_ADDED_KEY in tagsDiff.keys():
|
|
||||||
for tagKey, tagValue in tagsDiff[DIFF_ADDED_KEY].items():
|
|
||||||
|
|
||||||
# if targetTracks:
|
|
||||||
# self.__tac.deleteTrackTagByKey(targetTrackId, addedTrackTagKey)
|
|
||||||
self.__tac.updateTrackTag(targetTrackId, tagKey, tagValue)
|
|
||||||
|
|
||||||
|
|
||||||
if DIFF_REMOVED_KEY in tagsDiff.keys():
|
|
||||||
for tagKey, tagValue in tagsDiff[DIFF_REMOVED_KEY].items():
|
|
||||||
# if changedCurrentTracks:
|
|
||||||
# self.__tac.updateTrackTag(targetTrackId, removedTrackTagKey, changedCurrentTracks[0].getTags()[removedTrackTagKey])
|
|
||||||
self.__tac.deleteTrackTagByKey(targetTrackId, tagKey)
|
|
||||||
|
|
||||||
if DIFF_CHANGED_KEY in tagsDiff.keys():
|
|
||||||
for tagKey, tagValue in tagsDiff[DIFF_CHANGED_KEY].items():
|
|
||||||
# if changedCurrentTracks:
|
|
||||||
# self.__tac.updateTrackTag(targetTrackId, changedTrackTagKey, changedCurrentTracks[0].getTags()[changedTrackTagKey])
|
|
||||||
self.__tac.updateTrackTag(targetTrackId, tagKey, tagValue)
|
|
||||||
|
|
||||||
|
|
||||||
if TrackDescriptor.DISPOSITION_SET_KEY in trackDiff.keys():
|
|
||||||
changedTrackDispositionDiff = trackDiff[TrackDescriptor.DISPOSITION_SET_KEY]
|
|
||||||
|
|
||||||
if DIFF_ADDED_KEY in changedTrackDispositionDiff.keys():
|
|
||||||
for changedDisposition in changedTrackDispositionDiff[DIFF_ADDED_KEY]:
|
|
||||||
if targetTrackIndex is not None:
|
|
||||||
self.__tc.setDispositionState(self.__currentPattern.getId(), targetTrackIndex, changedDisposition, True)
|
|
||||||
|
|
||||||
if DIFF_REMOVED_KEY in changedTrackDispositionDiff.keys():
|
|
||||||
for changedDisposition in changedTrackDispositionDiff[DIFF_REMOVED_KEY]:
|
|
||||||
if targetTrackIndex is not None:
|
|
||||||
self.__tc.setDispositionState(self.__currentPattern.getId(), targetTrackIndex, changedDisposition, False)
|
|
||||||
|
|
||||||
|
|
||||||
self.updateDifferences()
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def action_edit_pattern(self):
|
|
||||||
|
|
||||||
patternObj = self.getPatternObjFromInput()
|
|
||||||
|
|
||||||
if patternObj['pattern']:
|
|
||||||
|
|
||||||
selectedPatternId = self.__pc.findPattern(patternObj)
|
|
||||||
|
|
||||||
if selectedPatternId is None:
|
|
||||||
raise click.ClickException(f"MediaDetailsScreen.action_edit_pattern(): Pattern to edit has no id")
|
|
||||||
|
|
||||||
self.app.push_screen(PatternDetailsScreen(patternId = selectedPatternId, showId = self.getSelectedShowDescriptor().getId()), self.handle_edit_pattern) # <-
|
|
||||||
|
|
||||||
|
|
||||||
def handle_edit_pattern(self, screenResult):
|
|
||||||
self.query_one("#pattern_input", Input).value = screenResult['pattern']
|
|
||||||
self.updateDifferences()
|
|
||||||
@@ -1,20 +0,0 @@
"""Load ORM model modules so SQLAlchemy relationship strings can resolve."""

from .show import Base, Show
from .pattern import Pattern
from .track import Track
from .track_tag import TrackTag
from .media_tag import MediaTag
from .shifted_season import ShiftedSeason
from .property import Property

__all__ = [
    'Base',
    'Show',
    'Pattern',
    'Track',
    'TrackTag',
    'MediaTag',
    'ShiftedSeason',
    'Property',
]
@@ -1,47 +0,0 @@
import os, sys, importlib, inspect, glob, re

from ffx.configuration_controller import ConfigurationController
from ffx.database import databaseContext

from sqlalchemy import Engine
from sqlalchemy.orm import sessionmaker


class Conversion():

    def __init__(self):

        self._context = {}
        self._context['config'] = ConfigurationController()

        self._context['database'] = databaseContext(databasePath=self._context['config'].getDatabaseFilePath())

        #HINT: These attributes are name-mangled (e.g. _Conversion__databaseSession);
        #      subclasses should access the session/engine via self._context['database'].
        self.__databaseSession: sessionmaker = self._context['database']['session']
        self.__databaseEngine: Engine = self._context['database']['engine']


    @staticmethod
    def list():

        basePath = os.path.dirname(__file__)

        filenamePattern = re.compile("conversion_([0-9]+)_([0-9]+)\\.py")

        filenameList = [os.path.basename(fp) for fp in glob.glob(f"{ basePath }/*.py") if fp != __file__]

        versionTupleList = [(fm.group(1), fm.group(2)) for fn in filenameList if (fm := filenamePattern.search(fn))]

        return versionTupleList


    @staticmethod
    def getClassReference(versionFrom, versionTo):
        importlib.import_module(f"ffx.model.conversions.conversion_{ versionFrom }_{ versionTo }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.model.conversions.conversion_{ versionFrom }_{ versionTo }"]):
            #HINT: Excluding DispositionCombination as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'Conversion' and name.startswith('Conversion'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [Conversion.getClassReference(verFrom, verTo) for verFrom, verTo in Conversion.list()]
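As a standalone illustration of the discovery scheme in `Conversion.list()` above — a sketch with hypothetical file names, whereas the real method globs the conversions directory:

```python
import re

# Same expression as Conversion.list(): conversion_<from>_<to>.py
filenamePattern = re.compile(r"conversion_([0-9]+)_([0-9]+)\.py")

filenames = ["conversion_2_3.py", "conversion_3_4.py", "conversion.py", "helpers.py"]

# Walrus-filtered comprehension mirroring versionTupleList: non-matching
# files (the base module, helpers) are silently skipped
versions = [(m.group(1), m.group(2)) for fn in filenames if (m := filenamePattern.search(fn))]

print(versions)  # [('2', '3'), ('3', '4')]
```

Note the groups stay strings, which is what `getClassReference()` interpolates into the module name.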
@@ -1,17 +0,0 @@
import os, sys, importlib, inspect, glob, re

from sqlalchemy import text

from .conversion import Conversion


class Conversion_2_3(Conversion):

    def __init__(self):
        super().__init__()

    def applyConversion(self):

        #HINT: The base class keeps the session/engine in name-mangled attributes
        #      (_Conversion__databaseSession), so the subclass reaches them via self._context.
        s = self._context['database']['session']()
        e = self._context['database']['engine']

        with e.connect() as c:
            # SQLAlchemy 2.x requires textual SQL to be wrapped in text() and
            # the implicit transaction to be committed explicitly
            c.execute(text("ALTER TABLE user ADD COLUMN email VARCHAR(255)"))
            c.commit()
@@ -1,7 +0,0 @@
import os, sys, importlib, inspect, glob, re

from .conversion import Conversion


class Conversion_3_4(Conversion):
    pass
@@ -1,28 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class MediaTag(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is a string naming the target entity class, or the mapped class itself
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'media_tags'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)

    # v1.x
    pattern_id = Column(Integer, ForeignKey('patterns.id', ondelete="CASCADE"))
    pattern = relationship('Pattern', back_populates='media_tags')
@@ -1,83 +0,0 @@
import click

from sqlalchemy import Column, Integer, String, Text, ForeignKey, UniqueConstraint
from sqlalchemy.orm import relationship

from .show import Base, Show

from ffx.media_descriptor import MediaDescriptor
from ffx.show_descriptor import ShowDescriptor


class Pattern(Base):

    __tablename__ = 'patterns'
    __table_args__ = (
        UniqueConstraint('show_id', 'pattern', name='uq_patterns_show_id_pattern'),
    )

    # v1.x
    id = Column(Integer, primary_key=True)
    pattern = Column(String)

    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # pattern: Mapped[str] = mapped_column(String, nullable=False)

    # v1.x
    show_id = Column(Integer, ForeignKey('shows.id', ondelete="CASCADE"))
    show = relationship(Show, back_populates='patterns', lazy='joined')

    # v2.0
    # show_id: Mapped[int] = mapped_column(ForeignKey("shows.id", ondelete="CASCADE"))
    # show: Mapped["Show"] = relationship(back_populates="patterns")

    tracks = relationship('Track', back_populates='pattern', cascade="all, delete", lazy='joined')

    media_tags = relationship('MediaTag', back_populates='pattern', cascade="all, delete", lazy='joined')

    quality = Column(Integer, default=0)

    notes = Column(Text, default='')


    def getId(self):
        return int(self.id)

    def getShowId(self):
        return int(self.show_id)

    def getShowDescriptor(self, context) -> ShowDescriptor:
        # click.echo(f"self.show {self.show} id={self.show_id}")
        return self.show.getDescriptor(context)

    def getPattern(self):
        return str(self.pattern)

    def getTags(self):
        return {str(t.key): str(t.value) for t in self.media_tags}


    def getMediaDescriptor(self, context):

        kwargs = {}

        kwargs[MediaDescriptor.CONTEXT_KEY] = context

        kwargs[MediaDescriptor.TAGS_KEY] = self.getTags()
        kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY] = []

        # Set ordered subindices
        subIndexCounter = {}
        for track in self.tracks:
            trackType = track.getType()
            if trackType not in subIndexCounter.keys():
                subIndexCounter[trackType] = 0
            kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY].append(track.getDescriptor(context, subIndex=subIndexCounter[trackType]))
            subIndexCounter[trackType] += 1

        return MediaDescriptor(**kwargs)
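The `subIndexCounter` loop in `Pattern.getMediaDescriptor()` assigns each track an ordinal within its own type. The counting scheme in isolation — plain strings stand in for `TrackType` values here:

```python
tracks = ['video', 'audio', 'audio', 'subtitle', 'audio']

# Per-type counter: the first track of each type gets sub-index 0,
# the next one of the same type gets 1, and so on
subIndexCounter = {}
subIndices = []
for trackType in tracks:
    if trackType not in subIndexCounter:
        subIndexCounter[trackType] = 0
    subIndices.append((trackType, subIndexCounter[trackType]))
    subIndexCounter[trackType] += 1

print(subIndices)
# [('video', 0), ('audio', 0), ('audio', 1), ('subtitle', 0), ('audio', 2)]
```

This is why the sub-index depends on track ordering: re-ordering the `tracks` relationship would renumber the descriptors.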
@@ -1,16 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class Property(Base):

    __tablename__ = 'properties'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)
@@ -1,71 +0,0 @@
import click

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship

from .show import Base, Show


class ShiftedSeason(Base):

    __tablename__ = 'shifted_seasons'

    # v1.x
    id = Column(Integer, primary_key=True)

    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # pattern: Mapped[str] = mapped_column(String, nullable=False)

    # v1.x
    show_id = Column(Integer, ForeignKey('shows.id', ondelete="CASCADE"))
    show = relationship(Show, back_populates='shifted_seasons', lazy='joined')

    # v2.0
    # show_id: Mapped[int] = mapped_column(ForeignKey("shows.id", ondelete="CASCADE"))
    # show: Mapped["Show"] = relationship(back_populates="shifted_seasons")

    original_season = Column(Integer)

    first_episode = Column(Integer, default=-1)
    last_episode = Column(Integer, default=-1)

    season_offset = Column(Integer, default=0)
    episode_offset = Column(Integer, default=0)


    def getId(self):
        return self.id


    def getOriginalSeason(self):
        return self.original_season

    def getFirstEpisode(self):
        return self.first_episode

    def getLastEpisode(self):
        return self.last_episode


    def getSeasonOffset(self):
        return self.season_offset

    def getEpisodeOffset(self):
        return self.episode_offset


    def getObj(self):

        shiftedSeasonObj = {}

        shiftedSeasonObj['original_season'] = self.getOriginalSeason()
        shiftedSeasonObj['first_episode'] = self.getFirstEpisode()
        shiftedSeasonObj['last_episode'] = self.getLastEpisode()
        shiftedSeasonObj['season_offset'] = self.getSeasonOffset()
        shiftedSeasonObj['episode_offset'] = self.getEpisodeOffset()

        return shiftedSeasonObj
@@ -1,62 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from ffx.show_descriptor import ShowDescriptor

Base = declarative_base()


class Show(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is a string naming the target entity class, or the mapped class itself
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'shows'

    # v1.x
    id = Column(Integer, primary_key=True)

    name = Column(String)
    year = Column(Integer)

    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # name: Mapped[str] = mapped_column(String, nullable=False)
    # year: Mapped[int] = mapped_column(Integer, nullable=False)

    # v1.x
    #patterns = relationship('Pattern', back_populates='show', cascade="all, delete", passive_deletes=True)
    patterns = relationship('Pattern', back_populates='show', cascade="all, delete")
    # patterns = relationship('Pattern', back_populates='show', cascade="all")

    # v2.0
    # patterns: Mapped[List["Pattern"]] = relationship(back_populates="show", cascade="all, delete")

    shifted_seasons = relationship('ShiftedSeason', back_populates='show', cascade="all, delete")


    index_season_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDEX_SEASON_DIGITS)
    index_episode_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDEX_EPISODE_DIGITS)
    indicator_season_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDICATOR_SEASON_DIGITS)
    indicator_episode_digits = Column(Integer, default=ShowDescriptor.DEFAULT_INDICATOR_EPISODE_DIGITS)


    def getDescriptor(self, context):

        kwargs = {}
        kwargs[ShowDescriptor.CONTEXT_KEY] = context
        kwargs[ShowDescriptor.ID_KEY] = int(self.id)
        kwargs[ShowDescriptor.NAME_KEY] = str(self.name)
        kwargs[ShowDescriptor.YEAR_KEY] = int(self.year)
        kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY] = int(self.index_season_digits)
        kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY] = int(self.index_episode_digits)
        kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY] = int(self.indicator_season_digits)
        kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY] = int(self.indicator_episode_digits)

        return ShowDescriptor(**kwargs)
@@ -1,216 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base

from ffx.track_type import TrackType

from ffx.iso_language import IsoLanguage

from ffx.track_disposition import TrackDisposition
from ffx.track_descriptor import TrackDescriptor

from ffx.audio_layout import AudioLayout
from ffx.track_codec import TrackCodec


class Track(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is a string naming the target entity class, or the mapped class itself
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'tracks'

    # v1.x
    id = Column(Integer, primary_key=True, autoincrement=True)

    # P=pattern_id+sub_index+track_type
    track_type = Column(Integer)  # TrackType

    index = Column(Integer)
    source_index = Column(Integer)

    # v1.x
    pattern_id = Column(Integer, ForeignKey('patterns.id', ondelete="CASCADE"))
    pattern = relationship('Pattern', back_populates='tracks')

    track_tags = relationship('TrackTag', back_populates='track', cascade="all, delete", lazy="joined")

    disposition_flags = Column(Integer)

    codec_name = Column(String)
    audio_layout = Column(Integer)


    def __init__(self, **kwargs):

        trackType = kwargs.pop('track_type', None)
        if trackType is not None:
            self.track_type = int(trackType)

        dispositionSet = kwargs.pop(TrackDescriptor.DISPOSITION_SET_KEY, set())
        self.disposition_flags = int(TrackDisposition.toFlags(dispositionSet))

        super().__init__(**kwargs)


    @classmethod
    def fromFfprobeStreamObj(cls, streamObj, patternId):
        """Example ffprobe stream object:

        {
            'index': 4,
            'codec_name': 'hdmv_pgs_subtitle',
            'codec_long_name': 'HDMV Presentation Graphic Stream subtitles',
            'codec_type': 'subtitle',
            'codec_tag_string': '[0][0][0][0]',
            'codec_tag': '0x0000',
            'r_frame_rate': '0/0',
            'avg_frame_rate': '0/0',
            'time_base': '1/1000',
            'start_pts': 0,
            'start_time': '0.000000',
            'duration_ts': 1421035,
            'duration': '1421.035000',
            'disposition': {
                'default': 1,
                'dub': 0,
                'original': 0,
                'comment': 0,
                'lyrics': 0,
                'karaoke': 0,
                'forced': 0,
                'hearing_impaired': 0,
                'visual_impaired': 0,
                'clean_effects': 0,
                'attached_pic': 0,
                'timed_thumbnails': 0,
                'non_diegetic': 0,
                'captions': 0,
                'descriptions': 0,
                'metadata': 0,
                'dependent': 0,
                'still_image': 0
            },
            'tags': {
                'language': 'ger',
                'title': 'German Full'
            }
        }

        Earlier schema revision kept for reference:

        # v1.x
        id = Column(Integer, primary_key=True, autoincrement = True)

        # P=pattern_id+sub_index+track_type
        track_type = Column(Integer) # TrackType
        sub_index = Column(Integer)

        # v1.x
        pattern_id = Column(Integer, ForeignKey('patterns.id', ondelete='CASCADE'))
        pattern = relationship('Pattern', back_populates='tracks')

        language = Column(String) # IsoLanguage threeLetter
        title = Column(String)

        track_tags = relationship('TrackTag', back_populates='track', cascade='all, delete')

        disposition_flags = Column(Integer)
        """

        trackType = streamObj[TrackDescriptor.FFPROBE_CODEC_TYPE_KEY]

        if trackType in [t.label() for t in TrackType]:

            return cls(pattern_id=patternId,
                       # The track_type column stores the TrackType index, so map the
                       # ffprobe codec_type label to its enum member first
                       track_type=next(t for t in TrackType if t.label() == trackType).index(),
                       codec_name=streamObj[TrackDescriptor.FFPROBE_CODEC_NAME_KEY],
                       disposition_flags=sum([2**t.index() for (k, v) in streamObj[TrackDescriptor.FFPROBE_DISPOSITION_KEY].items()
                                              if v and (t := TrackDisposition.find(k)) is not None]),
                       audio_layout=AudioLayout.identify(streamObj))

        else:
            return None


    def getId(self):
        return int(self.id)

    def getPatternId(self):
        return int(self.pattern_id)

    def getType(self):
        return TrackType.fromIndex(self.track_type)

    def getCodec(self) -> TrackCodec:
        return TrackCodec.identify(self.codec_name)

    def getIndex(self):
        return int(self.index) if self.index is not None else -1

    def getSourceIndex(self):
        return int(self.source_index) if self.source_index is not None else -1

    def getLanguage(self):
        tags = {t.key: t.value for t in self.track_tags}
        return IsoLanguage.findThreeLetter(tags['language']) if 'language' in tags.keys() else IsoLanguage.UNDEFINED

    def getTitle(self):
        tags = {t.key: t.value for t in self.track_tags}
        return tags['title'] if 'title' in tags.keys() else ''

    def getDispositionSet(self):
        return TrackDisposition.toSet(self.disposition_flags)

    def getAudioLayout(self):
        return AudioLayout.fromIndex(self.audio_layout)

    def getTags(self):
        return {str(t.key): str(t.value) for t in self.track_tags}


    def setDisposition(self, disposition: TrackDisposition):
        self.disposition_flags = self.disposition_flags | int(2**disposition.index())

    def resetDisposition(self, disposition: TrackDisposition):
        self.disposition_flags = self.disposition_flags & sum([2**d.index() for d in TrackDisposition if d != disposition])

    def getDisposition(self, disposition: TrackDisposition):
        return bool(self.disposition_flags & 2**disposition.index())


    def getDescriptor(self, context=None, subIndex: int = -1) -> TrackDescriptor:

        kwargs = {}

        if context is not None:
            kwargs[TrackDescriptor.CONTEXT_KEY] = context

        kwargs[TrackDescriptor.ID_KEY] = self.getId()
        kwargs[TrackDescriptor.PATTERN_ID_KEY] = self.getPatternId()

        kwargs[TrackDescriptor.INDEX_KEY] = self.getIndex()
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = self.getSourceIndex()

        if subIndex > -1:
            kwargs[TrackDescriptor.SUB_INDEX_KEY] = subIndex

        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = self.getType()
        kwargs[TrackDescriptor.CODEC_KEY] = self.getCodec()

        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = self.getDispositionSet()
        kwargs[TrackDescriptor.TAGS_KEY] = self.getTags()

        kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = self.getAudioLayout()

        return TrackDescriptor(**kwargs)
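The disposition helpers above treat `disposition_flags` as a bitmask with one bit per `TrackDisposition.index()`. A minimal sketch of the same set/reset/test arithmetic — the `Disposition` enum here is a hypothetical stand-in for `TrackDisposition`:

```python
from enum import Enum

class Disposition(Enum):  # stand-in for TrackDisposition
    DEFAULT = 0
    FORCED = 1
    HEARING_IMPAIRED = 2

    def index(self):
        return self.value

def set_disposition(flags, d):
    # Turn the flag's bit on
    return flags | 2**d.index()

def reset_disposition(flags, d):
    # AND against a mask that keeps every bit except the one being cleared
    return flags & sum(2**x.index() for x in Disposition if x != d)

def get_disposition(flags, d):
    return bool(flags & 2**d.index())

flags = 0
flags = set_disposition(flags, Disposition.DEFAULT)   # 0b001
flags = set_disposition(flags, Disposition.FORCED)    # 0b011
flags = reset_disposition(flags, Disposition.DEFAULT) # 0b010
assert get_disposition(flags, Disposition.FORCED)
assert not get_disposition(flags, Disposition.DEFAULT)
```

The `sum(...)`-based clear mirrors `resetDisposition()` above; with an `IntFlag` one would more commonly write `flags & ~bit`.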
@@ -1,28 +0,0 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class TrackTag(Base):
    """
    relationship(argument, opt1, opt2, ...)
    argument is a string naming the target entity class, or the mapped class itself
    backref creates a bi-directional corresponding relationship (back_populates preferred)
    back_populates points to the corresponding relationship (the actual class attribute identifier)

    See: https://docs.sqlalchemy.org/en/(14|20)/orm/basic_relationships.html
    """

    __tablename__ = 'track_tags'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)

    # v1.x
    track_id = Column(Integer, ForeignKey('tracks.id', ondelete="CASCADE"))
    track = relationship('Track', back_populates='track_tags')
@@ -1,411 +0,0 @@
import re

import click

from ffx.model.media_tag import MediaTag
from ffx.model.pattern import Pattern
from ffx.model.track import Track
from ffx.model.track_tag import TrackTag
from ffx.track_descriptor import TrackDescriptor
from ffx.track_disposition import TrackDisposition


class DuplicatePatternMatchError(click.ClickException):
    pass


class InvalidPatternSchemaError(click.ClickException):
    pass


class PatternController:
    _compiled_regex_cache: dict[str, re.Pattern] = {}

    def __init__(self, context):

        self.context = context
        self.Session = self.context["database"]["session"]

        self.__configurationData = self.context["config"].getData()

        metadataConfiguration = (
            self.__configurationData["metadata"]
            if "metadata" in self.__configurationData.keys()
            else {}
        )

        self.__removeTrackKeys = (
            metadataConfiguration["streams"]["remove"]
            if "streams" in metadataConfiguration.keys()
            and "remove" in metadataConfiguration["streams"].keys()
            else []
        )
        self.__ignoreTrackKeys = (
            metadataConfiguration["streams"]["ignore"]
            if "streams" in metadataConfiguration.keys()
            and "ignore" in metadataConfiguration["streams"].keys()
            else []
        )

    @classmethod
    def _clear_regex_cache(cls):
        cls._compiled_regex_cache.clear()

    @classmethod
    def _compile_pattern_expression(cls, pattern_id: int, expression: str) -> re.Pattern:
        expression_text = str(expression)
        compiled = cls._compiled_regex_cache.get(expression_text)
        if compiled is None:
            try:
                compiled = re.compile(expression_text)
            except re.error as ex:
                raise click.ClickException(
                    f"Pattern #{pattern_id} contains an invalid regex {expression_text!r}: {ex}"
                )
            cls._compiled_regex_cache[expression_text] = compiled
        return compiled

    def _coerce_pattern_fields(self, patternObj):
        return {
            "show_id": int(patternObj["show_id"]),
            "pattern": str(patternObj["pattern"]),
            "quality": int(patternObj.get("quality", 0) or 0),
            "notes": str(patternObj.get("notes", "")),
        }

    def _coerce_media_tags(self, mediaTags):
        return {
            str(tagKey): str(tagValue)
            for tagKey, tagValue in (mediaTags or {}).items()
        }

    def _normalize_track_descriptors(self, trackDescriptors):
        if trackDescriptors is None:
            raise InvalidPatternSchemaError(
                "Patterns must define at least one track before they can be stored."
            )

        normalized_descriptors = []
        for trackDescriptor in trackDescriptors:
            if type(trackDescriptor) is not TrackDescriptor:
                raise TypeError(
                    "PatternController: All track descriptors are required to be of type TrackDescriptor"
                )
            normalized_descriptors.append(trackDescriptor)

        if not normalized_descriptors:
            raise InvalidPatternSchemaError(
                "Patterns must define at least one track before they can be stored."
            )

        normalized_descriptors = sorted(
            normalized_descriptors, key=lambda descriptor: descriptor.getIndex()
        )

        index_set = {descriptor.getIndex() for descriptor in normalized_descriptors}
        expected_indexes = set(range(len(normalized_descriptors)))
        if index_set != expected_indexes:
            raise click.ClickException(
                "Pattern tracks must use a contiguous zero-based index order."
            )

        return normalized_descriptors

    def _ensure_unique_pattern_definition(
        self,
        session,
        show_id: int,
        pattern_expression: str,
        exclude_pattern_id: int | None = None,
    ):
        query = session.query(Pattern).filter(
            Pattern.show_id == show_id,
            Pattern.pattern == pattern_expression,
        )
        if exclude_pattern_id is not None:
            query = query.filter(Pattern.id != int(exclude_pattern_id))

        existing_pattern = query.first()
        if existing_pattern is not None:
            raise click.ClickException(
                f"Pattern {pattern_expression!r} already exists for show #{show_id}."
            )

    def _build_track_row(self, trackDescriptor: TrackDescriptor) -> Track:
        track = Track(
            track_type=int(trackDescriptor.getType().index()),
            codec_name=str(trackDescriptor.getCodec().identifier()),
            index=int(trackDescriptor.getIndex()),
            source_index=int(trackDescriptor.getSourceIndex()),
            disposition_flags=int(
                TrackDisposition.toFlags(trackDescriptor.getDispositionSet())
            ),
            audio_layout=trackDescriptor.getAudioLayout().index(),
        )

        for tagKey, tagValue in trackDescriptor.getTags().items():
            if tagKey in self.__ignoreTrackKeys or tagKey in self.__removeTrackKeys:
                continue
            track.track_tags.append(TrackTag(key=str(tagKey), value=str(tagValue)))

        return track

    def _replace_pattern_schema(
        self,
        session,
        pattern: Pattern,
        mediaTags: dict[str, str],
        trackDescriptors: list[TrackDescriptor],
    ):
        for mediaTag in list(pattern.media_tags):
            session.delete(mediaTag)
        for track in list(pattern.tracks):
            session.delete(track)
        session.flush()

        for tagKey, tagValue in mediaTags.items():
            pattern.media_tags.append(MediaTag(key=str(tagKey), value=str(tagValue)))

        for trackDescriptor in trackDescriptors:
            pattern.tracks.append(self._build_track_row(trackDescriptor))

    def _validate_persisted_pattern(self, pattern: Pattern):
        if not pattern.tracks:
            raise InvalidPatternSchemaError(
                f"Pattern #{pattern.getId()} ({pattern.getPattern()!r}) is invalid because it has no tracks."
            )

    def savePatternSchema(
        self,
        patternObj,
        trackDescriptors,
        mediaTags=None,
        patternId: int | None = None,
    ) -> int:
        fields = self._coerce_pattern_fields(patternObj)
        normalized_tracks = self._normalize_track_descriptors(trackDescriptors)
        normalized_tags = self._coerce_media_tags(mediaTags)
        session = None

        try:
            session = self.Session()
            self._ensure_unique_pattern_definition(
                session,
                fields["show_id"],
                fields["pattern"],
                exclude_pattern_id=patternId,
            )

            if patternId is None:
                pattern = Pattern(
                    show_id=fields["show_id"],
                    pattern=fields["pattern"],
                    quality=fields["quality"],
                    notes=fields["notes"],
                )
                session.add(pattern)
                session.flush()
            else:
                pattern = session.query(Pattern).filter(Pattern.id == int(patternId)).first()
                if pattern is None:
                    raise click.ClickException(
                        f"PatternController.savePatternSchema(): Pattern #{patternId} not found"
                    )
                pattern.show_id = fields["show_id"]
                pattern.pattern = fields["pattern"]
                pattern.quality = fields["quality"]
                pattern.notes = fields["notes"]

            self._replace_pattern_schema(
                session,
                pattern,
                normalized_tags,
                normalized_tracks,
            )

            session.commit()
            self._clear_regex_cache()
            return pattern.getId()

        except click.ClickException:
            raise
        except Exception as ex:
            raise click.ClickException(
                f"PatternController.savePatternSchema(): {repr(ex)}"
            )
        finally:
            if session is not None:
                session.close()

    def addPattern(self, patternObj, trackDescriptors=None, mediaTags=None):
        return self.savePatternSchema(
            patternObj,
            trackDescriptors=trackDescriptors,
            mediaTags=mediaTags,
        )

    def updatePattern(self, patternId, patternObj):

        fields = self._coerce_pattern_fields(patternObj)
        session = None

        try:
            session = self.Session()
            pattern = session.query(Pattern).filter(Pattern.id == int(patternId)).first()

            if pattern is not None:
                self._ensure_unique_pattern_definition(
                    session,
                    fields["show_id"],
                    fields["pattern"],
                    exclude_pattern_id=patternId,
                )
                self._validate_persisted_pattern(pattern)

                pattern.show_id = fields["show_id"]
                pattern.pattern = fields["pattern"]
                pattern.quality = fields["quality"]
                pattern.notes = fields["notes"]

                session.commit()
                self._clear_regex_cache()
                return True

            return False

        except click.ClickException:
            raise
        except Exception as ex:
            raise click.ClickException(f"PatternController.updatePattern(): {repr(ex)}")
        finally:
            if session is not None:
                session.close()

    def findPattern(self, patternObj):
        session = None

        try:
            session = self.Session()
            pattern = (
                session.query(Pattern)
                .filter(
                    Pattern.show_id == int(patternObj["show_id"]),
                    Pattern.pattern == str(patternObj["pattern"]),
                )
                .first()
            )

            if pattern is not None:
                return int(pattern.id)
            return None

        except Exception as ex:
            raise click.ClickException(f"PatternController.findPattern(): {repr(ex)}")
        finally:
            if session is not None:
                session.close()

    def getPatternsForShow(self, showId: int) -> list[Pattern]:

        if type(showId) is not int:
            raise ValueError(
                "PatternController.getPatternsForShow(): Argument showId is required to be of type int"
            )

        session = None
        try:
            session = self.Session()
            return (
                session.query(Pattern)
                .filter(Pattern.show_id == int(showId))
                .order_by(Pattern.id)
                .all()
            )

        except Exception as ex:
            raise click.ClickException(f"PatternController.getPatternsForShow(): {repr(ex)}")
        finally:
            if session is not None:
                session.close()

    def getPattern(self, patternId: int):

        if type(patternId) is not int:
            raise ValueError(
                "PatternController.getPattern(): Argument patternId is required to be of type int"
            )

        session = None
        try:
            session = self.Session()
            return session.query(Pattern).filter(Pattern.id == int(patternId)).first()

        except Exception as ex:
            raise click.ClickException(f"PatternController.getPattern(): {repr(ex)}")
        finally:
            if session is not None:
                session.close()

    def deletePattern(self, patternId):
        session = None
        try:
            session = self.Session()
            pattern = session.query(Pattern).filter(Pattern.id == int(patternId)).first()

            if pattern is not None:
                session.delete(pattern)
                session.commit()
                self._clear_regex_cache()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"PatternController.deletePattern(): {repr(ex)}")
        finally:
            if session is not None:
                session.close()

    def matchFilename(self, filename: str) -> dict:
        """Return {'match': regex match, 'pattern': Pattern} or {} when unmatched."""
        session = None

        try:
            session = self.Session()
            matches = []
            query = session.query(Pattern).order_by(Pattern.show_id, Pattern.id)

            for pattern in query.all():
                compiled = self._compile_pattern_expression(
                    pattern.getId(),
                    pattern.getPattern(),
                )
                patternMatch = compiled.search(str(filename))
                if patternMatch is None:
                    continue

                self._validate_persisted_pattern(pattern)
                matches.append({"match": patternMatch, "pattern": pattern})

            if not matches:
                return {}

            if len(matches) > 1:
                duplicateDescriptions = ", ".join(
                    [
                        f"show #{match['pattern'].getShowId()} pattern #{match['pattern'].getId()} {match['pattern'].getPattern()!r}"
                        for match in matches
                    ]
                )
                raise DuplicatePatternMatchError(
                    f"Filename {filename!r} matched more than one pattern: {duplicateDescriptions}"
                )

            return matches[0]

        except click.ClickException:
            raise
        except Exception as ex:
            raise click.ClickException(f"PatternController.matchFilename(): {repr(ex)}")
        finally:
            if session is not None:
                session.close()
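The matching strategy above — collect every stored pattern that hits the filename, return the single hit, and fail loudly on ambiguity — can be sketched without the ffx classes. This is a minimal standalone illustration, not the project's API: `stored_patterns`, `match_filename`, and the sample expressions are all hypothetical stand-ins for `Pattern` rows and `PatternController.matchFilename()`.

```python
import re

# Hypothetical stand-ins for stored Pattern rows: (pattern_id, show_id, expression).
stored_patterns = [
    (1, 10, r"Show\.A\.S(?P<season>\d+)E(?P<episode>\d+)"),
    (2, 11, r"Show\.B\.S(?P<season>\d+)E(?P<episode>\d+)"),
]


def match_filename(filename: str) -> dict:
    # Mirror the controller's approach: gather ALL matches, then demand uniqueness,
    # rather than returning the first hit and silently ignoring ambiguity.
    matches = []
    for pattern_id, show_id, expression in stored_patterns:
        m = re.compile(expression).search(filename)
        if m is not None:
            matches.append({"match": m, "pattern_id": pattern_id, "show_id": show_id})
    if not matches:
        return {}
    if len(matches) > 1:
        raise ValueError(f"{filename!r} matched more than one pattern")
    return matches[0]


result = match_filename("Show.A.S01E02.mkv")
```

Raising on multiple matches (as `DuplicatePatternMatchError` does) surfaces overlapping patterns at match time instead of routing files to an arbitrary show.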
@@ -1,111 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .show_controller import ShowController
from .pattern_controller import PatternController

from ffx.model.pattern import Pattern


# Screen[dict[int, str, int]]
class PatternDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 330;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, patternId = None, showId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__pc = PatternController(context = self.context)
        self.__sc = ShowController(context = self.context)

        self.__patternId = patternId
        self.__pattern: Pattern = self.__pc.getPattern(patternId) if patternId is not None else {}
        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else {}

    def on_mount(self):
        if self.__showDescriptor:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")
        if self.__pattern is not None:
            self.query_one("#patternlabel", Static).update(str(self.__pattern.pattern))

    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure you want to delete the following filename pattern?", id="toplabel", classes="two")

            yield Static("", classes="two")

            yield Static("Pattern")
            yield Static("", id="patternlabel")

            yield Static("", classes="two")

            yield Static("from show")
            yield Static("", id="showlabel")

            yield Static("", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__patternId is None:
                raise click.ClickException('PatternDeleteScreen.on_button_pressed(): pattern id is undefined')

            if self.__pc.deletePattern(self.__patternId):
                self.dismiss(self.__pattern)

            else:
                # TODO: show a notification message
                self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
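The delete screen hands its result back to the caller through `dismiss()`, which routes the value into the callback registered by `push_screen()`. This is a minimal dependency-free sketch of that screen/callback handshake; `MiniScreen` and `MiniApp` are hypothetical stand-ins for Textual's `Screen` and `App`, not their real implementations.

```python
class MiniScreen:
    # Simplified stand-in for a Textual Screen: dismiss() forwards a result
    # to whatever callback the app registered when pushing the screen.
    def __init__(self):
        self._callback = None

    def dismiss(self, result=None):
        if self._callback is not None:
            self._callback(result)


class MiniApp:
    # push_screen(screen, callback) wires the screen's eventual result
    # to the caller-supplied callback.
    def __init__(self):
        self.results = []

    def push_screen(self, screen, callback=None):
        screen._callback = callback


app = MiniApp()
screen = MiniScreen()
app.push_screen(screen, app.results.append)
screen.dismiss({"deleted": True})   # result flows into app.results
```

In the real screens this is why `self.dismiss(self.__pattern)` is enough: the parent screen that pushed `PatternDeleteScreen` receives the deleted pattern in its result handler.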
@@ -1,656 +0,0 @@
import click, re
from typing import List

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input, DataTable, TextArea
from textual.containers import Grid

from ffx.model.pattern import Pattern

from .track_details_screen import TrackDetailsScreen
from .track_delete_screen import TrackDeleteScreen

from .tag_details_screen import TagDetailsScreen
from .tag_delete_screen import TagDeleteScreen
from .screen_support import build_screen_bootstrap, build_screen_controllers

from ffx.track_type import TrackType

from ffx.track_disposition import TrackDisposition
from ffx.track_descriptor import TrackDescriptor

from textual.widgets._data_table import CellDoesNotExist

from ffx.file_properties import FileProperties
from ffx.iso_language import IsoLanguage
from ffx.audio_layout import AudioLayout

from ffx.helper import formatRichColor, removeRichColor


# Screen[dict[int, str, int]]
class PatternDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 7 17;
        grid-rows: 2 2 2 2 2 2 6 2 2 8 2 2 8 2 2 2 2;
        grid-columns: 25 25 25 25 25 25 25;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }

    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }
    .six {
        column-span: 6;
    }
    .seven {
        column-span: 7;
    }

    .four_box {
        min-height: 6;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .yellow {
        tint: yellow 40%;
    }
    """

    def __init__(self, patternId = None, showId = None):
        super().__init__()

        bootstrap = build_screen_bootstrap(self.app.getContext())
        self.context = bootstrap.context

        self.__removeGlobalKeys = bootstrap.remove_global_keys
        self.__ignoreGlobalKeys = bootstrap.ignore_global_keys

        controllers = build_screen_controllers(
            self.context,
            pattern=True,
            show=True,
            track=True,
            tag=True,
        )
        self.__pc = controllers['pattern']
        self.__sc = controllers['show']
        self.__tc = controllers['track']
        self.__tac = controllers['tag']

        self.__pattern: Pattern = self.__pc.getPattern(patternId) if patternId is not None else None
        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None
        self.__draftTracks: List[TrackDescriptor] = []
        self.__draftTags: dict[str, str] = {}

    def updateTracks(self):

        self.tracksTable.clear()

        tracks = self.getCurrentTrackDescriptors()

        typeCounter = {}

        td: TrackDescriptor
        for td in tracks:

            if (trackType := td.getType()) != TrackType.ATTACHMENT:

                if trackType not in typeCounter:
                    typeCounter[trackType] = 0

                dispoSet = td.getDispositionSet()

                trackLanguage = td.getLanguage()
                audioLayout = td.getAudioLayout()

                row = (td.getIndex(),
                       trackType.label(),
                       typeCounter[trackType],
                       td.getCodec().label(),
                       audioLayout.label() if trackType == TrackType.AUDIO
                       and audioLayout != AudioLayout.LAYOUT_UNDEFINED else ' ',
                       trackLanguage.label() if trackLanguage != IsoLanguage.UNDEFINED else ' ',
                       td.getTitle(),
                       'Yes' if TrackDisposition.DEFAULT in dispoSet else 'No',
                       'Yes' if TrackDisposition.FORCED in dispoSet else 'No',
                       td.getSourceIndex())

                self.tracksTable.add_row(*map(str, row))

                typeCounter[trackType] += 1

    def getCurrentTrackDescriptors(self) -> List[TrackDescriptor]:
        if self.__pattern is not None:
            return self.__tc.findSiblingDescriptors(self.__pattern.getId())
        return list(self.__draftTracks)

    def normalizeDraftTracks(self):

        typeCounter = {}

        for index, trackDescriptor in enumerate(self.__draftTracks):
            trackDescriptor.setIndex(index)

            trackType = trackDescriptor.getType()
            subIndex = typeCounter.get(trackType, 0)
            trackDescriptor.setSubIndex(subIndex)
            typeCounter[trackType] = subIndex + 1

            if trackDescriptor.getSourceIndex() < 0:
                trackDescriptor.setSourceIndex(index)

    def swapTracks(self, trackIndex1: int, trackIndex2: int):

        ti1 = int(trackIndex1)
        ti2 = int(trackIndex2)

        if self.__pattern is None:
            numSiblings = len(self.__draftTracks)

            if ti1 < 0 or ti1 >= numSiblings:
                raise ValueError(f"PatternDetailsScreen.swapTracks(): trackIndex1 ({ti1}) is out of range ({numSiblings})")

            if ti2 < 0 or ti2 >= numSiblings:
                raise ValueError(f"PatternDetailsScreen.swapTracks(): trackIndex2 ({ti2}) is out of range ({numSiblings})")

            self.__draftTracks[ti1], self.__draftTracks[ti2] = self.__draftTracks[ti2], self.__draftTracks[ti1]
            self.normalizeDraftTracks()
            self.updateTracks()
            return

        siblingDescriptors: List[TrackDescriptor] = self.__tc.findSiblingDescriptors(self.__pattern.getId())

        numSiblings = len(siblingDescriptors)

        if ti1 < 0 or ti1 >= numSiblings:
            raise ValueError(f"PatternDetailsScreen.swapTracks(): trackIndex1 ({ti1}) is out of range ({numSiblings})")

        if ti2 < 0 or ti2 >= numSiblings:
            raise ValueError(f"PatternDetailsScreen.swapTracks(): trackIndex2 ({ti2}) is out of range ({numSiblings})")

        sibling1 = siblingDescriptors[ti1]
        sibling2 = siblingDescriptors[ti2]

        # raise click.ClickException(f"siblings id1={sibling1.getId()} id2={sibling2.getId()}")

        subIndex2 = sibling2.getSubIndex()

        sibling2.setIndex(sibling1.getIndex())
        sibling2.setSubIndex(sibling1.getSubIndex())

        sibling1.setIndex(ti2)
        sibling1.setSubIndex(subIndex2)

        if not self.__tc.updateTrack(sibling1.getId(), sibling1):
            raise click.ClickException('Update sibling1 failed')
        if not self.__tc.updateTrack(sibling2.getId(), sibling2):
            raise click.ClickException('Update sibling2 failed')

        self.updateTracks()

    def updateTags(self):

        self.tagsTable.clear()

        tags = (
            self.__tac.findAllMediaTags(self.__pattern.getId())
            if self.__pattern is not None
            else self.__draftTags
        )

        for tagKey, tagValue in tags.items():

            textColor = None
            if tagKey in self.__ignoreGlobalKeys:
                textColor = 'blue'
            if tagKey in self.__removeGlobalKeys:
                textColor = 'red'

            row = (formatRichColor(tagKey, textColor), formatRichColor(tagValue, textColor))
            self.tagsTable.add_row(*map(str, row))

    def on_mount(self):

        if self.__showDescriptor is not None:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")

        if self.__pattern is not None:

            self.query_one("#pattern_input", Input).value = str(self.__pattern.getPattern())

        if self.__pattern and self.__pattern.quality:
            self.query_one("#quality_input", Input).value = str(self.__pattern.quality)

        if self.__pattern and self.__pattern.notes:
            self.query_one("#notes_textarea", TextArea).text = str(self.__pattern.notes)

        self.updateTags()
        self.updateTracks()

    def compose(self):

        self.tagsTable = DataTable(classes="seven")

        # Define the columns with headers
        self.column_key_tag_key = self.tagsTable.add_column("Key", width=50)
        self.column_key_tag_value = self.tagsTable.add_column("Value", width=100)

        self.tagsTable.cursor_type = 'row'

        self.tracksTable = DataTable(id="tracks_table", classes="seven")

        self.column_key_track_index = self.tracksTable.add_column("Index", width=5)
        self.column_key_track_type = self.tracksTable.add_column("Type", width=10)
        self.column_key_track_sub_index = self.tracksTable.add_column("SubIndex", width=8)
        self.column_key_track_codec = self.tracksTable.add_column("Codec", width=10)
        self.column_key_track_audio_layout = self.tracksTable.add_column("Layout", width=10)
        self.column_key_track_language = self.tracksTable.add_column("Language", width=15)
        self.column_key_track_title = self.tracksTable.add_column("Title", width=48)
        self.column_key_track_default = self.tracksTable.add_column("Default", width=8)
        self.column_key_track_forced = self.tracksTable.add_column("Forced", width=8)
        self.column_key_track_source_index = self.tracksTable.add_column("SrcIndex", width=8)

        self.tracksTable.cursor_type = 'row'

        yield Header()

        with Grid():

            # 1
            yield Static("Edit filename pattern" if self.__pattern is not None else "New filename pattern", id="toplabel")
            yield Input(type="text", id="pattern_input", classes="six")

            # 2
            yield Static("from show")
            yield Static("", id="showlabel", classes="five")
            yield Button("Substitute pattern", id="pattern_button")

            # 3
            yield Static(" ", classes="seven")

            # 4
            yield Static("Quality")
            yield Input(type="integer", id="quality_input")
            yield Static(' ', classes="five")

            # 5
            yield Static(" ", classes="seven")

            # 6
            yield Static("Notes")
            yield Static(" ", classes="six")

            # 7
            yield TextArea(id="notes_textarea", classes="four_box seven")

            # 8
            yield Static(" ", classes="seven")

            # 9
            yield Static("Media Tags")
            yield Button("Add", id="button_add_tag")
            yield Button("Edit", id="button_edit_tag")
            yield Button("Delete", id="button_delete_tag")

            yield Static(" ")
            yield Static(" ")
            yield Static(" ")

            # 10
            yield self.tagsTable

            # 11
            yield Static(" ", classes="seven")

            # 12
            yield Static("Streams")
            yield Button("Add", id="button_add_track")
            yield Button("Edit", id="button_edit_track")
            yield Button("Delete", id="button_delete_track")

            yield Static(" ")
            yield Button("Up", id="button_track_up")
            yield Button("Down", id="button_track_down")

            # 13
            yield self.tracksTable

            # 14
            yield Static(" ", classes="seven")

            # 15
            yield Static(" ", classes="seven")

            # 16
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")
            yield Static(" ", classes="five")

            # 17
            yield Static(" ", classes="seven")

        yield Footer()

    def getPatternFromInput(self):
        return str(self.query_one("#pattern_input", Input).value)

    def getQualityFromInput(self):
        try:
            return int(self.query_one("#quality_input", Input).value)
        except ValueError:
            return 0

    def getNotesFromInput(self):
        return str(self.query_one("#notes_textarea", TextArea).text)

    def getSelectedTrackDescriptor(self):

        try:

            row_key, col_key = self.tracksTable.coordinate_to_cell_key(self.tracksTable.cursor_coordinate)

            if row_key is not None:
                selected_track_data = self.tracksTable.get_row(row_key)

                trackIndex = int(selected_track_data[0])
                trackSubIndex = int(selected_track_data[2])

                for trackDescriptor in self.getCurrentTrackDescriptors():
                    if (trackDescriptor.getIndex() == trackIndex
                            and trackDescriptor.getSubIndex() == trackSubIndex):
                        return trackDescriptor

            return None

        except CellDoesNotExist:
            return None

    def getSelectedTag(self):

        try:

            # Fetch the currently selected row when 'Enter' is pressed
            # selected_row_index = self.table.cursor_row
            row_key, col_key = self.tagsTable.coordinate_to_cell_key(self.tagsTable.cursor_coordinate)

            if row_key is not None:
                selected_tag_data = self.tagsTable.get_row(row_key)

                tagKey = removeRichColor(selected_tag_data[0])
                tagValue = removeRichColor(selected_tag_data[1])

                return tagKey, tagValue

            else:
                return None

        except CellDoesNotExist:
            return None

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:
        # Check if the button pressed is the one we are interested in
        if event.button.id == "save_button":

            patternDescriptor = {}
            patternDescriptor['show_id'] = self.__showDescriptor.getId()
            patternDescriptor['pattern'] = self.getPatternFromInput()
            patternDescriptor['quality'] = self.getQualityFromInput()
            patternDescriptor['notes'] = self.getNotesFromInput()

            if self.__pattern is not None:

                if self.__pc.updatePattern(self.__pattern.getId(), patternDescriptor):
                    self.dismiss(patternDescriptor)
                else:
                    # TODO: show a notification message
                    self.app.pop_screen()

            else:
                patternId = self.__pc.savePatternSchema(
                    patternDescriptor,
                    trackDescriptors=self.__draftTracks,
                    mediaTags=self.__draftTags,
                )
                if patternId:
                    self.dismiss(patternDescriptor)
                else:
                    # TODO: show a notification message
                    self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()

        numTracks = len(self.getCurrentTrackDescriptors())

        if event.button.id == "button_add_track":
            self.app.push_screen(
                TrackDetailsScreen(
                    patternId=self.__pattern.getId() if self.__pattern is not None else None,
                    patternLabel=self.getPatternFromInput(),
                    siblingTrackDescriptors=self.getCurrentTrackDescriptors(),
                    index=numTracks,
                ),
                self.handle_add_track,
            )

        selectedTrack = self.getSelectedTrackDescriptor()
        if selectedTrack is not None:
            if event.button.id == "button_edit_track":
                self.app.push_screen(
                    TrackDetailsScreen(
|
|
||||||
trackDescriptor=selectedTrack,
|
|
||||||
patternId=self.__pattern.getId() if self.__pattern is not None else None,
|
|
||||||
patternLabel=self.getPatternFromInput(),
|
|
||||||
siblingTrackDescriptors=self.getCurrentTrackDescriptors(),
|
|
||||||
),
|
|
||||||
self.handle_edit_track,
|
|
||||||
)
|
|
||||||
if event.button.id == "button_delete_track":
|
|
||||||
self.app.push_screen(
|
|
||||||
TrackDeleteScreen(trackDescriptor = selectedTrack),
|
|
||||||
self.handle_delete_track,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "button_add_tag":
|
|
||||||
self.app.push_screen(TagDetailsScreen(), self.handle_update_tag)
|
|
||||||
|
|
||||||
if event.button.id == "button_edit_tag":
|
|
||||||
selectedTag = self.getSelectedTag()
|
|
||||||
if selectedTag is not None:
|
|
||||||
tagKey, tagValue = selectedTag
|
|
||||||
self.app.push_screen(TagDetailsScreen(key=tagKey, value=tagValue), self.handle_update_tag)
|
|
||||||
|
|
||||||
if event.button.id == "button_delete_tag":
|
|
||||||
selectedTag = self.getSelectedTag()
|
|
||||||
if selectedTag is not None:
|
|
||||||
tagKey, tagValue = selectedTag
|
|
||||||
self.app.push_screen(TagDeleteScreen(key=tagKey, value=tagValue), self.handle_delete_tag)
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "pattern_button":
|
|
||||||
|
|
||||||
pattern = self.query_one("#pattern_input", Input).value
|
|
||||||
|
|
||||||
patternMatch = re.search(FileProperties.SE_INDICATOR_PATTERN, pattern)
|
|
||||||
|
|
||||||
if patternMatch:
|
|
||||||
self.query_one("#pattern_input", Input).value = pattern.replace(patternMatch.group(1),
|
|
||||||
FileProperties.SE_INDICATOR_PATTERN)
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "button_track_up":
|
|
||||||
|
|
||||||
selectedTrackDescriptor = self.getSelectedTrackDescriptor()
|
|
||||||
if selectedTrackDescriptor is not None:
|
|
||||||
selectedTrackIndex = selectedTrackDescriptor.getIndex()
|
|
||||||
|
|
||||||
if selectedTrackIndex > 0 and selectedTrackIndex < self.tracksTable.row_count:
|
|
||||||
correspondingTrackIndex = selectedTrackIndex - 1
|
|
||||||
self.swapTracks(selectedTrackIndex, correspondingTrackIndex)
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "button_track_down":
|
|
||||||
|
|
||||||
selectedTrackDescriptor = self.getSelectedTrackDescriptor()
|
|
||||||
if selectedTrackDescriptor is not None:
|
|
||||||
selectedTrackIndex = selectedTrackDescriptor.getIndex()
|
|
||||||
|
|
||||||
if selectedTrackIndex >= 0 and selectedTrackIndex < (self.tracksTable.row_count - 1):
|
|
||||||
correspondingTrackIndex = selectedTrackIndex + 1
|
|
||||||
self.swapTracks(selectedTrackIndex, correspondingTrackIndex)
|
|
||||||
|
|
||||||
|
|
||||||
def handle_add_track(self, trackDescriptor : TrackDescriptor):
|
|
||||||
if trackDescriptor is None:
|
|
||||||
return
|
|
||||||
|
|
||||||
if self.__pattern is not None:
|
|
||||||
self.__tc.addTrack(trackDescriptor, patternId=self.__pattern.getId())
|
|
||||||
else:
|
|
||||||
self.__draftTracks.append(trackDescriptor)
|
|
||||||
self.normalizeDraftTracks()
|
|
||||||
|
|
||||||
self.updateTracks()
|
|
||||||
|
|
||||||
|
|
||||||
def handle_edit_track(self, trackDescriptor : TrackDescriptor):
|
|
||||||
if trackDescriptor is None:
|
|
||||||
return
|
|
||||||
|
|
||||||
if self.__pattern is not None:
|
|
||||||
if not self.__tc.updateTrack(trackDescriptor.getId(), trackDescriptor):
|
|
||||||
raise click.ClickException("PatternDetailsScreen.handle_edit_track(): track update failed")
|
|
||||||
else:
|
|
||||||
selectedTrack = self.getSelectedTrackDescriptor()
|
|
||||||
for index, currentTrack in enumerate(self.__draftTracks):
|
|
||||||
if (selectedTrack is not None
|
|
||||||
and currentTrack.getIndex() == selectedTrack.getIndex()
|
|
||||||
and currentTrack.getSubIndex() == selectedTrack.getSubIndex()):
|
|
||||||
self.__draftTracks[index] = trackDescriptor
|
|
||||||
break
|
|
||||||
self.normalizeDraftTracks()
|
|
||||||
|
|
||||||
self.updateTracks()
|
|
||||||
|
|
||||||
|
|
||||||
def handle_delete_track(self, trackDescriptor : TrackDescriptor):
|
|
||||||
if trackDescriptor is None:
|
|
||||||
return
|
|
||||||
|
|
||||||
if self.__pattern is not None:
|
|
||||||
track = self.__tc.getTrack(trackDescriptor.getPatternId(), trackDescriptor.getIndex())
|
|
||||||
|
|
||||||
if track is None:
|
|
||||||
raise click.ClickException(
|
|
||||||
f"Track is none: patternId={trackDescriptor.getPatternId()} type={trackDescriptor.getType()} subIndex={trackDescriptor.getSubIndex()}"
|
|
||||||
)
|
|
||||||
|
|
||||||
self.__tc.deleteTrack(track.getId())
|
|
||||||
else:
|
|
||||||
self.__draftTracks = [
|
|
||||||
currentTrack
|
|
||||||
for currentTrack in self.__draftTracks
|
|
||||||
if not (
|
|
||||||
currentTrack.getIndex() == trackDescriptor.getIndex()
|
|
||||||
and currentTrack.getSubIndex() == trackDescriptor.getSubIndex()
|
|
||||||
)
|
|
||||||
]
|
|
||||||
self.normalizeDraftTracks()
|
|
||||||
|
|
||||||
self.updateTracks()
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def handle_update_tag(self, tag):
|
|
||||||
if tag is None:
|
|
||||||
return
|
|
||||||
|
|
||||||
if self.__pattern is None:
|
|
||||||
self.__draftTags[str(tag[0])] = str(tag[1])
|
|
||||||
else:
|
|
||||||
if self.__tac.updateMediaTag(self.__pattern.getId(), tag[0], tag[1]) is None:
|
|
||||||
raise click.ClickException("PatternDetailsScreen.handle_update_tag(): tag update failed")
|
|
||||||
|
|
||||||
self.updateTags()
|
|
||||||
|
|
||||||
def handle_delete_tag(self, tag):
|
|
||||||
if tag is None:
|
|
||||||
return
|
|
||||||
|
|
||||||
if self.__pattern is None:
|
|
||||||
self.__draftTags.pop(str(tag[0]), None)
|
|
||||||
self.updateTags()
|
|
||||||
return
|
|
||||||
|
|
||||||
if self.__tac.deleteMediaTagByKey(self.__pattern.getId(), tag[0]):
|
|
||||||
self.updateTags()
|
|
||||||
else:
|
|
||||||
raise click.ClickException('tag delete failed')
|
|
||||||
@@ -1,169 +0,0 @@
import os
import shlex
import subprocess
from typing import Iterable, List

from .logging_utils import get_ffx_logger

COMMAND_TIMED_OUT_RETURN_CODE = 124
COMMAND_NOT_FOUND_RETURN_CODE = 127
MIN_NICENESS = -20
MAX_NICENESS = 19
DISABLED_NICENESS_SENTINEL = 99
DISABLED_CPU_PERCENT_SENTINEL = 0
MIN_CPU_PERCENT = 1
MAX_CPU_PERCENT = 100


def formatCommandSequence(commandSequence: Iterable[str]) -> str:
    return shlex.join([str(token) for token in commandSequence])


def normalizeNiceness(niceness) -> int | None:
    if niceness is None:
        return None

    niceness = int(niceness)
    if niceness == DISABLED_NICENESS_SENTINEL:
        return None

    if niceness < MIN_NICENESS or niceness > MAX_NICENESS:
        raise ValueError(
            f"Niceness must be between {MIN_NICENESS} and {MAX_NICENESS}, "
            + f"or {DISABLED_NICENESS_SENTINEL} to disable."
        )

    return niceness


def getPresentCpuCount() -> int:
    if hasattr(os, 'sched_getaffinity'):
        affinity = os.sched_getaffinity(0)
        if affinity:
            return len(affinity)

    cpuCount = os.cpu_count()
    return cpuCount if cpuCount and cpuCount > 0 else 1


def normalizeCpuPercent(cpuPercent) -> int | None:
    if cpuPercent is None:
        return None

    cpuPercent = str(cpuPercent).strip()
    if cpuPercent.endswith('%'):
        percentValue = int(cpuPercent[:-1].strip())
        if percentValue == DISABLED_CPU_PERCENT_SENTINEL:
            return None

        if percentValue < MIN_CPU_PERCENT or percentValue > MAX_CPU_PERCENT:
            raise ValueError(
                f"CPU percentage must be between {MIN_CPU_PERCENT}% and {MAX_CPU_PERCENT}%, "
                + f"or {DISABLED_CPU_PERCENT_SENTINEL} to disable."
            )

        return percentValue * getPresentCpuCount()

    cpuPercent = int(cpuPercent)
    if cpuPercent == DISABLED_CPU_PERCENT_SENTINEL:
        return None

    if cpuPercent < MIN_CPU_PERCENT:
        raise ValueError(
            "CPU limit must be a positive absolute value such as 200, "
            + f"a percentage such as 25%, or {DISABLED_CPU_PERCENT_SENTINEL} to disable."
        )

    return cpuPercent


def getWrappedCommandSequence(commandSequence: List[str], context: dict = None) -> List[str]:
    """
    niceness: -20 to 19, disabled when unset
    cpu limit: positive absolute cpulimit value, or a machine-wide percentage

    When both limits are configured, cpulimit wraps a nice-adjusted command:
    cpulimit -l <cpu> -- nice -n <niceness> <command>
    """

    resourceLimits = (context or {}).get('resource_limits', {})
    niceness = normalizeNiceness(resourceLimits.get('niceness'))
    cpu_percent = normalizeCpuPercent(
        resourceLimits.get('cpu_limit', resourceLimits.get('cpu_percent'))
    )
    wrappedCommandSequence = [str(token) for token in commandSequence]

    if niceness is not None:
        wrappedCommandSequence = ['nice', '-n', str(niceness)] + wrappedCommandSequence
    if cpu_percent is not None:
        wrappedCommandSequence = ['cpulimit', '-l', str(cpu_percent), '--'] + wrappedCommandSequence

    return wrappedCommandSequence


def getProcessTimeoutSeconds(context: dict = None, timeoutSeconds: float = None):
    if timeoutSeconds is None:
        timeoutSeconds = (context or {}).get('resource_limits', {}).get('timeout_seconds')

    if timeoutSeconds is None:
        return None

    timeoutSeconds = float(timeoutSeconds)

    return timeoutSeconds if timeoutSeconds > 0 else None


def executeProcess(
    commandSequence: List[str],
    directory: str = None,
    context: dict = None,
    timeoutSeconds: float = None,
):
    logger = context['logger'] if context is not None and 'logger' in context else get_ffx_logger()
    wrappedCommandSequence = getWrappedCommandSequence(commandSequence, context=context)
    timeoutSeconds = getProcessTimeoutSeconds(context=context, timeoutSeconds=timeoutSeconds)

    logger.debug(
        "executeProcess() cwd=%s timeout=%s command=%s",
        directory or '.',
        timeoutSeconds if timeoutSeconds is not None else 'none',
        formatCommandSequence(wrappedCommandSequence),
    )

    try:
        completed = subprocess.run(
            wrappedCommandSequence,
            capture_output=True,
            text=True,
            cwd=directory,
            timeout=timeoutSeconds,
            check=False,
        )
    except FileNotFoundError as ex:
        error = (
            "Command not found while running "
            + f"{formatCommandSequence(wrappedCommandSequence)}: {ex.filename or ex}"
        )
        logger.error(error)
        return '', error, COMMAND_NOT_FOUND_RETURN_CODE
    except subprocess.TimeoutExpired as ex:
        stdout = ex.stdout or ''
        stderr = ex.stderr or ''
        error = (
            f"Command timed out after {timeoutSeconds} seconds while running "
            + formatCommandSequence(wrappedCommandSequence)
        )
        if stderr:
            error = f"{error}\n{stderr}"
        logger.error(error)
        return stdout, error, COMMAND_TIMED_OUT_RETURN_CODE

    if completed.returncode != 0:
        logger.warning(
            "executeProcess() rc=%s command=%s",
            completed.returncode,
            formatCommandSequence(wrappedCommandSequence),
        )

    return completed.stdout, completed.stderr, completed.returncode
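As a standalone sketch of how the wrapping in `getWrappedCommandSequence()` composes (the `wrap` helper below is a simplified mirror written for illustration, not part of the module; the `nice` and `cpulimit` binaries are assumed to exist on PATH at run time):

```python
import shlex

def wrap(command, niceness=None, cpu_limit=None):
    # Mirrors getWrappedCommandSequence(): nice is applied innermost,
    # then cpulimit wraps the already nice-adjusted command.
    wrapped = [str(token) for token in command]
    if niceness is not None:
        wrapped = ['nice', '-n', str(niceness)] + wrapped
    if cpu_limit is not None:
        wrapped = ['cpulimit', '-l', str(cpu_limit), '--'] + wrapped
    return wrapped

print(shlex.join(wrap(['ffmpeg', '-i', 'in.mkv', 'out.mkv'], niceness=10, cpu_limit=200)))
# cpulimit -l 200 -- nice -n 10 ffmpeg -i in.mkv out.mkv
```

Keeping the transformation as a pure list-to-list function means `executeProcess()` can log and run exactly the same argv without re-quoting.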
@@ -1,65 +0,0 @@
from __future__ import annotations

from dataclasses import dataclass

from .pattern_controller import PatternController
from .show_controller import ShowController
from .shifted_season_controller import ShiftedSeasonController
from .tag_controller import TagController
from .tmdb_controller import TmdbController
from .track_controller import TrackController


@dataclass(frozen=True)
class ScreenBootstrap:
    context: dict
    configuration_data: dict
    signature_tags: dict
    remove_global_keys: list
    ignore_global_keys: list
    remove_track_keys: list
    ignore_track_keys: list


def build_screen_bootstrap(context: dict) -> ScreenBootstrap:
    configurationData = context['config'].getData()
    metadataConfiguration = configurationData.get('metadata', {})
    streamMetadataConfiguration = metadataConfiguration.get('streams', {})

    return ScreenBootstrap(
        context=context,
        configuration_data=configurationData,
        signature_tags=metadataConfiguration.get('signature', {}),
        remove_global_keys=metadataConfiguration.get('remove', []),
        ignore_global_keys=metadataConfiguration.get('ignore', []),
        remove_track_keys=streamMetadataConfiguration.get('remove', []),
        ignore_track_keys=streamMetadataConfiguration.get('ignore', []),
    )


def build_screen_controllers(
    context: dict,
    *,
    pattern: bool = False,
    show: bool = False,
    track: bool = False,
    tag: bool = False,
    tmdb: bool = False,
    shifted_season: bool = False,
) -> dict[str, object]:
    controllers = {}

    if pattern:
        controllers['pattern'] = PatternController(context=context)
    if show:
        controllers['show'] = ShowController(context=context)
    if track:
        controllers['track'] = TrackController(context=context)
    if tag:
        controllers['tag'] = TagController(context=context)
    if tmdb:
        controllers['tmdb'] = TmdbController()
    if shifted_season:
        controllers['shifted_season'] = ShiftedSeasonController(context=context)

    return controllers
@@ -1,12 +0,0 @@
from textual.app import ComposeResult
from textual.screen import Screen
from textual.widgets import Footer, Placeholder


class SettingsScreen(Screen):

    def __init__(self):
        super().__init__()
        self.context = self.app.getContext()

    def compose(self) -> ComposeResult:
        yield Placeholder("Settings Screen")
        yield Footer()
@@ -1,230 +0,0 @@
import click

from ffx.model.shifted_season import ShiftedSeason


class EpisodeOrderException(Exception):
    pass

class RangeOverlapException(Exception):
    pass


class ShiftedSeasonController():

    def __init__(self, context):
        self.context = context
        self.Session = self.context['database']['session']  # convenience

    def checkShiftedSeason(self, showId: int, shiftedSeasonObj: dict, shiftedSeasonId: int = 0):
        """
        Check whether the episode range of the given shifted season overlaps
        any sibling entry for the same show and original season.

        shiftedSeasonId, when non-zero, excludes that entry from the check
        (used when updating an existing shifted season).

        Returns True if the range is free of overlaps, False otherwise.
        """

        try:
            s = self.Session()

            originalSeason = shiftedSeasonObj['original_season']
            firstEpisode = int(shiftedSeasonObj['first_episode'])
            lastEpisode = int(shiftedSeasonObj['last_episode'])

            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))
            if shiftedSeasonId:
                q = q.filter(ShiftedSeason.id != int(shiftedSeasonId))

            siblingShiftedSeason: ShiftedSeason
            for siblingShiftedSeason in q.all():

                siblingOriginalSeason = siblingShiftedSeason.getOriginalSeason()
                siblingFirstEpisode = siblingShiftedSeason.getFirstEpisode()
                siblingLastEpisode = siblingShiftedSeason.getLastEpisode()

                if (originalSeason == siblingOriginalSeason
                        and lastEpisode >= siblingFirstEpisode
                        and siblingLastEpisode >= firstEpisode):
                    return False
            return True

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.checkShiftedSeason(): {repr(ex)}")
        finally:
            s.close()
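The overlap test above is the standard closed-interval intersection check; a minimal standalone sketch (plain `(first, last)` bounds stand in for the ORM objects):

```python
def ranges_overlap(first_a, last_a, first_b, last_b):
    # Two closed episode ranges intersect iff each one starts before
    # the other ends -- the same condition checkShiftedSeason() uses.
    return last_a >= first_b and last_b >= first_a

print(ranges_overlap(1, 12, 13, 24))   # False: ranges are adjacent, no overlap
print(ranges_overlap(1, 12, 12, 24))   # True: episode 12 is in both ranges
```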
    def addShiftedSeason(self, showId: int, shiftedSeasonObj: dict):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.addShiftedSeason(): Argument showId is required to be of type int")

        if type(shiftedSeasonObj) is not dict:
            raise ValueError("ShiftedSeasonController.addShiftedSeason(): Argument shiftedSeasonObj is required to be of type dict")

        try:
            s = self.Session()

            firstEpisode = int(shiftedSeasonObj['first_episode'])
            lastEpisode = int(shiftedSeasonObj['last_episode'])

            if lastEpisode < firstEpisode:
                raise EpisodeOrderException()

            shiftedSeason = ShiftedSeason(show_id=int(showId),
                                          original_season=int(shiftedSeasonObj['original_season']),
                                          first_episode=firstEpisode,
                                          last_episode=lastEpisode,
                                          season_offset=int(shiftedSeasonObj['season_offset']),
                                          episode_offset=int(shiftedSeasonObj['episode_offset']))
            s.add(shiftedSeason)
            s.commit()
            return shiftedSeason.getId()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.addShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def updateShiftedSeason(self, shiftedSeasonId: int, shiftedSeasonObj: dict):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.updateShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        if type(shiftedSeasonObj) is not dict:
            raise ValueError("ShiftedSeasonController.updateShiftedSeason(): Argument shiftedSeasonObj is required to be of type dict")

        try:
            s = self.Session()

            shiftedSeason = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId)).first()

            if shiftedSeason is not None:
                shiftedSeason.original_season = int(shiftedSeasonObj['original_season'])
                shiftedSeason.first_episode = int(shiftedSeasonObj['first_episode'])
                shiftedSeason.last_episode = int(shiftedSeasonObj['last_episode'])
                shiftedSeason.season_offset = int(shiftedSeasonObj['season_offset'])
                shiftedSeason.episode_offset = int(shiftedSeasonObj['episode_offset'])

                s.commit()
                return True

            return False

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.updateShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def findShiftedSeason(self, showId: int, originalSeason: int, firstEpisode: int, lastEpisode: int):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument showId is required to be of type int")

        if type(originalSeason) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument originalSeason is required to be of type int")

        if type(firstEpisode) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument firstEpisode is required to be of type int")

        if type(lastEpisode) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument lastEpisode is required to be of type int")

        try:
            s = self.Session()
            shiftedSeason = s.query(ShiftedSeason).filter(
                ShiftedSeason.show_id == int(showId),
                ShiftedSeason.original_season == int(originalSeason),
                ShiftedSeason.first_episode == int(firstEpisode),
                ShiftedSeason.last_episode == int(lastEpisode),
            ).first()

            return shiftedSeason.getId() if shiftedSeason is not None else None

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.findShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def getShiftedSeasonSiblings(self, showId: int):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.getShiftedSeasonSiblings(): Argument showId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))

            return q.all()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.getShiftedSeasonSiblings(): {repr(ex)}")
        finally:
            s.close()

    def getShiftedSeason(self, shiftedSeasonId: int):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.getShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        try:
            s = self.Session()
            return s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId)).first()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.getShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def deleteShiftedSeason(self, shiftedSeasonId):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.deleteShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        try:
            s = self.Session()
            shiftedSeason = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId)).first()

            if shiftedSeason is not None:
                # Delete via the session rather than Query.delete(), which
                # bypasses ORM-level cascades: https://stackoverflow.com/a/19245058
                s.delete(shiftedSeason)

                s.commit()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.deleteShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def shiftSeason(self, showId, season, episode):

        shiftedSeasonEntry: ShiftedSeason
        for shiftedSeasonEntry in self.getShiftedSeasonSiblings(showId):

            if (season == shiftedSeasonEntry.getOriginalSeason()
                    and (shiftedSeasonEntry.getFirstEpisode() == -1 or episode >= shiftedSeasonEntry.getFirstEpisode())
                    and (shiftedSeasonEntry.getLastEpisode() == -1 or episode <= shiftedSeasonEntry.getLastEpisode())):

                shiftedSeason = season + shiftedSeasonEntry.getSeasonOffset()
                shiftedEpisode = episode + shiftedSeasonEntry.getEpisodeOffset()

                self.context['logger'].info(f"Shifting season: {season} episode: {episode} "
                                            + f"-> season: {shiftedSeason} episode: {shiftedEpisode}")

                return shiftedSeason, shiftedEpisode

        return season, episode
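A standalone sketch of the offset mapping that `shiftSeason()` applies, with plain dicts standing in for the `ShiftedSeason` rows (`-1` marks an open range bound, as above; the example numbers are illustrative):

```python
def shift(entries, season, episode):
    # First matching entry wins; unmatched input passes through unchanged.
    for e in entries:
        if (season == e['original_season']
                and (e['first_episode'] == -1 or episode >= e['first_episode'])
                and (e['last_episode'] == -1 or episode <= e['last_episode'])):
            return season + e['season_offset'], episode + e['episode_offset']
    return season, episode

# e.g. absolute-numbered episodes 13..24 of season 1 remapped to season 2, episodes 1..12
entries = [{'original_season': 1, 'first_episode': 13, 'last_episode': 24,
            'season_offset': 1, 'episode_offset': -12}]
print(shift(entries, 1, 15))  # (2, 3)
print(shift(entries, 1, 5))   # (1, 5): outside the range, unchanged
```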
@@ -1,125 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .shifted_season_controller import ShiftedSeasonController

from ffx.model.shifted_season import ShiftedSeason


# Screen[dict[int, str, int]]
class ShiftedSeasonDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 330;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, showId=None, shiftedSeasonId=None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__ssc = ShiftedSeasonController(context=self.context)

        self._showId = showId
        self.__shiftedSeasonId = shiftedSeasonId

    def on_mount(self):

        shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

        self.query_one("#static_show_id", Static).update(str(self._showId))
        self.query_one("#static_original_season", Static).update(str(shiftedSeason.getOriginalSeason()))
        self.query_one("#static_first_episode", Static).update(str(shiftedSeason.getFirstEpisode()))
        self.query_one("#static_last_episode", Static).update(str(shiftedSeason.getLastEpisode()))
        self.query_one("#static_season_offset", Static).update(str(shiftedSeason.getSeasonOffset()))
        self.query_one("#static_episode_offset", Static).update(str(shiftedSeason.getEpisodeOffset()))

    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure you want to delete the following shifted season?", id="toplabel", classes="two")

            yield Static(" ", classes="two")

            yield Static("From show")
            yield Static(" ", id="static_show_id")

            yield Static(" ", classes="two")

            yield Static("Original season")
            yield Static(" ", id="static_original_season")

            yield Static("First episode")
            yield Static(" ", id="static_first_episode")

            yield Static("Last episode")
            yield Static(" ", id="static_last_episode")

            yield Static("Season offset")
            yield Static(" ", id="static_season_offset")

            yield Static("Episode offset")
            yield Static(" ", id="static_episode_offset")

            yield Static(" ", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__shiftedSeasonId is None:
                raise click.ClickException('ShiftedSeasonDeleteScreen.on_button_pressed(): shifted season id is undefined')

            if self.__ssc.deleteShiftedSeason(self.__shiftedSeasonId):
                self.dismiss(self.__shiftedSeasonId)

            else:
                #TODO: show an error message
                self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,221 +0,0 @@
|
|||||||
from typing import List

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid

from .shifted_season_controller import ShiftedSeasonController

from ffx.model.shifted_season import ShiftedSeason


# Screen[dict[int, str, int]]
class ShiftedSeasonDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 3 10;
        grid-rows: 2 2 2 2 2 2 2 2 2 2;
        grid-columns: 40 40 40;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }


    .two {
        column-span: 3;
    }

    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }
    .six {
        column-span: 6;
    }
    .seven {
        column-span: 7;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .yellow {
        tint: yellow 40%;
    }
    """

    def __init__(self, showId = None, shiftedSeasonId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__ssc = ShiftedSeasonController(context = self.context)

        self.__showId = showId
        self.__shiftedSeasonId = shiftedSeasonId

    def on_mount(self):

        if self.__shiftedSeasonId is not None:
            shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

            originalSeason = shiftedSeason.getOriginalSeason()
            self.query_one("#input_original_season", Input).value = str(originalSeason)

            firstEpisode = shiftedSeason.getFirstEpisode()
            self.query_one("#input_first_episode", Input).value = str(firstEpisode) if firstEpisode != -1 else ''

            lastEpisode = shiftedSeason.getLastEpisode()
            self.query_one("#input_last_episode", Input).value = str(lastEpisode) if lastEpisode != -1 else ''

            seasonOffset = shiftedSeason.getSeasonOffset()
            self.query_one("#input_season_offset", Input).value = str(seasonOffset) if seasonOffset else ''

            episodeOffset = shiftedSeason.getEpisodeOffset()
            self.query_one("#input_episode_offset", Input).value = str(episodeOffset) if episodeOffset else ''


    def compose(self):

        yield Header()

        with Grid():

            # 1
            yield Static("Edit shifted season" if self.__shiftedSeasonId is not None else "New shifted season", id="toplabel", classes="three")

            # 2
            yield Static(" ", classes="three")

            # 3
            yield Static("Original season")
            yield Input(id="input_original_season", classes="two")

            # 4
            yield Static("First Episode")
            yield Input(id="input_first_episode", classes="two")

            # 5
            yield Static("Last Episode")
            yield Input(id="input_last_episode", classes="two")

            # 6
            yield Static("Season offset")
            yield Input(id="input_season_offset", classes="two")

            # 7
            yield Static("Episode offset")
            yield Input(id="input_episode_offset", classes="two")

            # 8
            yield Static(" ", classes="three")

            # 9
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")
            yield Static(" ")

            # 10
            yield Static(" ", classes="three")

        yield Footer()


    def getShiftedSeasonObjFromInput(self):

        shiftedSeasonObj = {}

        originalSeason = self.query_one("#input_original_season", Input).value
        if not originalSeason:
            return None
        shiftedSeasonObj['original_season'] = int(originalSeason)

        try:
            shiftedSeasonObj['first_episode'] = int(self.query_one("#input_first_episode", Input).value)
        except ValueError:
            shiftedSeasonObj['first_episode'] = -1

        try:
            shiftedSeasonObj['last_episode'] = int(self.query_one("#input_last_episode", Input).value)
        except ValueError:
            shiftedSeasonObj['last_episode'] = -1

        try:
            shiftedSeasonObj['season_offset'] = int(self.query_one("#input_season_offset", Input).value)
        except ValueError:
            shiftedSeasonObj['season_offset'] = 0

        try:
            shiftedSeasonObj['episode_offset'] = int(self.query_one("#input_episode_offset", Input).value)
        except ValueError:
            shiftedSeasonObj['episode_offset'] = 0

        return shiftedSeasonObj


    # Event handler for button presses
    def on_button_pressed(self, event: Button.Pressed) -> None:

        # Check if the button pressed is the one we are interested in
        if event.button.id == "save_button":

            shiftedSeasonObj = self.getShiftedSeasonObjFromInput()

            if shiftedSeasonObj is not None:

                if self.__shiftedSeasonId is not None:

                    if self.__ssc.checkShiftedSeason(self.__showId, shiftedSeasonObj,
                                                     shiftedSeasonId = self.__shiftedSeasonId):
                        if self.__ssc.updateShiftedSeason(self.__shiftedSeasonId, shiftedSeasonObj):
                            self.dismiss((self.__shiftedSeasonId, shiftedSeasonObj))
                        else:
                            # TODO: show an error message
                            self.app.pop_screen()

                else:
                    if self.__ssc.checkShiftedSeason(self.__showId, shiftedSeasonObj):
                        self.__shiftedSeasonId = self.__ssc.addShiftedSeason(self.__showId, shiftedSeasonObj)
                        self.dismiss((self.__shiftedSeasonId, shiftedSeasonObj))


        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,120 +0,0 @@
import click

from ffx.model.show import Show
from ffx.show_descriptor import ShowDescriptor


class ShowController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience


    def getShowDescriptor(self, showId):

        try:
            s = self.Session()
            show = s.query(Show).filter(Show.id == showId).first()

            if show is not None:
                return show.getDescriptor(self.context)

        except Exception as ex:
            raise click.ClickException(f"ShowController.getShowDescriptor(): {repr(ex)}")
        finally:
            s.close()

    def getShow(self, showId):

        try:
            s = self.Session()
            return s.query(Show).filter(Show.id == showId).first()

        except Exception as ex:
            raise click.ClickException(f"ShowController.getShow(): {repr(ex)}")
        finally:
            s.close()

    def getAllShows(self):

        try:
            s = self.Session()
            return s.query(Show).all()

        except Exception as ex:
            raise click.ClickException(f"ShowController.getAllShows(): {repr(ex)}")
        finally:
            s.close()


    def updateShow(self, showDescriptor: ShowDescriptor):

        try:
            s = self.Session()
            currentShow = s.query(Show).filter(Show.id == showDescriptor.getId()).first()

            if currentShow is None:
                show = Show(id = int(showDescriptor.getId()),
                            name = str(showDescriptor.getName()),
                            year = int(showDescriptor.getYear()),
                            index_season_digits = showDescriptor.getIndexSeasonDigits(),
                            index_episode_digits = showDescriptor.getIndexEpisodeDigits(),
                            indicator_season_digits = showDescriptor.getIndicatorSeasonDigits(),
                            indicator_episode_digits = showDescriptor.getIndicatorEpisodeDigits())

                s.add(show)
                s.commit()
                return True
            else:
                changed = False
                if currentShow.name != str(showDescriptor.getName()):
                    currentShow.name = str(showDescriptor.getName())
                    changed = True
                if currentShow.year != int(showDescriptor.getYear()):
                    currentShow.year = int(showDescriptor.getYear())
                    changed = True

                if currentShow.index_season_digits != int(showDescriptor.getIndexSeasonDigits()):
                    currentShow.index_season_digits = int(showDescriptor.getIndexSeasonDigits())
                    changed = True
                if currentShow.index_episode_digits != int(showDescriptor.getIndexEpisodeDigits()):
                    currentShow.index_episode_digits = int(showDescriptor.getIndexEpisodeDigits())
                    changed = True
                if currentShow.indicator_season_digits != int(showDescriptor.getIndicatorSeasonDigits()):
                    currentShow.indicator_season_digits = int(showDescriptor.getIndicatorSeasonDigits())
                    changed = True
                if currentShow.indicator_episode_digits != int(showDescriptor.getIndicatorEpisodeDigits()):
                    currentShow.indicator_episode_digits = int(showDescriptor.getIndicatorEpisodeDigits())
                    changed = True

                if changed:
                    s.commit()
                return changed

        except Exception as ex:
            raise click.ClickException(f"ShowController.updateShow(): {repr(ex)}")
        finally:
            s.close()


    def deleteShow(self, show_id):
        try:
            s = self.Session()
            show = s.query(Show).filter(Show.id == int(show_id)).first()

            if show is not None:

                # Delete via the session instead of query.delete();
                # see https://stackoverflow.com/a/19245058
                # q.delete()
                s.delete(show)

                s.commit()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"ShowController.deleteShow(): {repr(ex)}")
        finally:
            s.close()
@@ -1,95 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .show_controller import ShowController

# Screen[dict[int, str, int]]
class ShowDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 auto;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, showId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__sc = ShowController(context = self.context)

        # None (not {}) when no show id is given, so the `is not None` checks below work
        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None


    def on_mount(self):
        if self.__showDescriptor is not None:
            self.query_one("#showlabel", Static).update(f"{self.__showDescriptor.getId()} - {self.__showDescriptor.getName()} ({self.__showDescriptor.getYear()})")


    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure you want to delete the following show?", id="toplabel", classes="two")

            yield Static("", classes="two")

            yield Static("", id="showlabel")
            yield Static("")

            yield Static("", classes="two")

            yield Static("", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")


        yield Footer()


    # Event handler for button presses
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__showDescriptor is not None:
                if self.__sc.deleteShow(self.__showDescriptor.getId()):
                    self.dismiss(self.__showDescriptor)

                else:
                    # TODO: show an error message
                    self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,145 +0,0 @@
from .configuration_controller import ConfigurationController
from .constants import (
    DEFAULT_SHOW_INDEX_EPISODE_DIGITS,
    DEFAULT_SHOW_INDEX_SEASON_DIGITS,
    DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS,
    DEFAULT_SHOW_INDICATOR_SEASON_DIGITS,
)
from .logging_utils import get_ffx_logger


class ShowDescriptor():
    """This class represents a show's metadata (id, name, year) and its season/episode digit formatting"""

    CONTEXT_KEY = 'context'

    ID_KEY = 'id'
    NAME_KEY = 'name'
    YEAR_KEY = 'year'

    INDEX_SEASON_DIGITS_KEY = 'index_season_digits'
    INDEX_EPISODE_DIGITS_KEY = 'index_episode_digits'
    INDICATOR_SEASON_DIGITS_KEY = 'indicator_season_digits'
    INDICATOR_EPISODE_DIGITS_KEY = 'indicator_episode_digits'

    DEFAULT_INDEX_SEASON_DIGITS = DEFAULT_SHOW_INDEX_SEASON_DIGITS
    DEFAULT_INDEX_EPISODE_DIGITS = DEFAULT_SHOW_INDEX_EPISODE_DIGITS
    DEFAULT_INDICATOR_SEASON_DIGITS = DEFAULT_SHOW_INDICATOR_SEASON_DIGITS
    DEFAULT_INDICATOR_EPISODE_DIGITS = DEFAULT_SHOW_INDICATOR_EPISODE_DIGITS

    @classmethod
    def getDefaultDigitLengths(cls, context: dict | None = None) -> dict[str, int]:
        configurationData = {}

        if context is not None:
            configController = context.get('config')
            if configController is not None and hasattr(configController, 'getData'):
                configurationData = configController.getData()

        return {
            cls.INDEX_SEASON_DIGITS_KEY: ConfigurationController.getConfiguredIntegerValue(
                configurationData,
                ConfigurationController.DEFAULT_INDEX_SEASON_DIGITS_CONFIG_KEY,
                cls.DEFAULT_INDEX_SEASON_DIGITS,
            ),
            cls.INDEX_EPISODE_DIGITS_KEY: ConfigurationController.getConfiguredIntegerValue(
                configurationData,
                ConfigurationController.DEFAULT_INDEX_EPISODE_DIGITS_CONFIG_KEY,
                cls.DEFAULT_INDEX_EPISODE_DIGITS,
            ),
            cls.INDICATOR_SEASON_DIGITS_KEY: ConfigurationController.getConfiguredIntegerValue(
                configurationData,
                ConfigurationController.DEFAULT_INDICATOR_SEASON_DIGITS_CONFIG_KEY,
                cls.DEFAULT_INDICATOR_SEASON_DIGITS,
            ),
            cls.INDICATOR_EPISODE_DIGITS_KEY: ConfigurationController.getConfiguredIntegerValue(
                configurationData,
                ConfigurationController.DEFAULT_INDICATOR_EPISODE_DIGITS_CONFIG_KEY,
                cls.DEFAULT_INDICATOR_EPISODE_DIGITS,
            ),
        }


    def __init__(self, **kwargs):

        if ShowDescriptor.CONTEXT_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.CONTEXT_KEY]) is not dict:
                raise TypeError(
                    f"ShowDescriptor.__init__(): Argument {ShowDescriptor.CONTEXT_KEY} is required to be of type dict"
                )
            self.__context = kwargs[ShowDescriptor.CONTEXT_KEY]
            self.__logger = self.__context['logger']
        else:
            self.__context = {}
            self.__logger = get_ffx_logger()

        if ShowDescriptor.ID_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.ID_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.ID_KEY} is required to be of type int")
            self.__showId = kwargs[ShowDescriptor.ID_KEY]
        else:
            self.__showId = -1

        if ShowDescriptor.NAME_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.NAME_KEY]) is not str:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.NAME_KEY} is required to be of type str")
            self.__showName = kwargs[ShowDescriptor.NAME_KEY]
        else:
            self.__showName = ''

        if ShowDescriptor.YEAR_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.YEAR_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.YEAR_KEY} is required to be of type int")
            self.__showYear = kwargs[ShowDescriptor.YEAR_KEY]
        else:
            self.__showYear = -1

        defaultDigitLengths = self.getDefaultDigitLengths(self.__context)

        if ShowDescriptor.INDEX_SEASON_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDEX_SEASON_DIGITS_KEY} is required to be of type int")
            self.__indexSeasonDigits = kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]
        else:
            self.__indexSeasonDigits = defaultDigitLengths[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]

        if ShowDescriptor.INDEX_EPISODE_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDEX_EPISODE_DIGITS_KEY} is required to be of type int")
            self.__indexEpisodeDigits = kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]
        else:
            self.__indexEpisodeDigits = defaultDigitLengths[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]

        if ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY} is required to be of type int")
            self.__indicatorSeasonDigits = kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]
        else:
            self.__indicatorSeasonDigits = defaultDigitLengths[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]

        if ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY in kwargs.keys():
            if type(kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]) is not int:
                raise TypeError(f"ShowDescriptor.__init__(): Argument {ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY} is required to be of type int")
            self.__indicatorEpisodeDigits = kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]
        else:
            self.__indicatorEpisodeDigits = defaultDigitLengths[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]


    def getId(self):
        return self.__showId
    def getName(self):
        return self.__showName
    def getYear(self):
        return self.__showYear

    def getIndexSeasonDigits(self):
        return self.__indexSeasonDigits
    def getIndexEpisodeDigits(self):
        return self.__indexEpisodeDigits
    def getIndicatorSeasonDigits(self):
        return self.__indicatorSeasonDigits
    def getIndicatorEpisodeDigits(self):
        return self.__indicatorEpisodeDigits

    def getFilenamePrefix(self):
        return f"{self.__showName} ({self.__showYear})"
@@ -1,486 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, DataTable, Input
from textual.containers import Grid
from textual.widgets._data_table import CellDoesNotExist

from .pattern_details_screen import PatternDetailsScreen
from .pattern_delete_screen import PatternDeleteScreen

from .show_descriptor import ShowDescriptor

from .shifted_season_details_screen import ShiftedSeasonDetailsScreen
from .shifted_season_delete_screen import ShiftedSeasonDeleteScreen

from ffx.model.shifted_season import ShiftedSeason

from .helper import filterFilename
from .screen_support import build_screen_bootstrap, build_screen_controllers


# Screen[dict[int, str, int]]
class ShowDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 16;
        grid-rows: 2 2 2 2 2 2 2 2 2 2 2 9 2 9 2 2;
        grid-columns: 30 30 30 30 30;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        column-span: 2;
        min-height: 8;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }


    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }
    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    BINDINGS = [
        ("a", "add_pattern", "Add Pattern"),
        ("e", "edit_pattern", "Edit Pattern"),
        ("r", "remove_pattern", "Remove Pattern"),
    ]

    def __init__(self, showId = None):
        super().__init__()

        bootstrap = build_screen_bootstrap(self.app.getContext())
        self.context = bootstrap.context

        controllers = build_screen_controllers(
            self.context,
            pattern=True,
            show=True,
            tmdb=True,
            shifted_season=True,
        )
        self.__sc = controllers['show']
        self.__pc = controllers['pattern']
        self.__tc = controllers['tmdb']
        self.__ssc = controllers['shifted_season']

        self.__showDescriptor = self.__sc.getShowDescriptor(showId) if showId is not None else None


    def updateShiftedSeasons(self):

        self.shiftedSeasonsTable.clear()

        if self.__showDescriptor is not None:

            showId = int(self.__showDescriptor.getId())

            shiftedSeason: ShiftedSeason
            for shiftedSeason in self.__ssc.getShiftedSeasonSiblings(showId=showId):

                shiftedSeasonObj = shiftedSeason.getObj()

                firstEpisode = shiftedSeasonObj['first_episode']
                firstEpisodeStr = str(firstEpisode) if firstEpisode != -1 else ''

                lastEpisode = shiftedSeasonObj['last_episode']
                lastEpisodeStr = str(lastEpisode) if lastEpisode != -1 else ''

                row = (shiftedSeasonObj['original_season'],
                       firstEpisodeStr,
                       lastEpisodeStr,
                       shiftedSeasonObj['season_offset'],
                       shiftedSeasonObj['episode_offset'])

                self.shiftedSeasonsTable.add_row(*map(str, row))


    def on_mount(self):

        if self.__showDescriptor is not None:

            showId = int(self.__showDescriptor.getId())

            self.query_one("#id_static", Static).update(str(showId))
            self.query_one("#name_input", Input).value = str(self.__showDescriptor.getName())
            self.query_one("#year_input", Input).value = str(self.__showDescriptor.getYear())

            self.query_one("#index_season_digits_input", Input).value = str(self.__showDescriptor.getIndexSeasonDigits())
            self.query_one("#index_episode_digits_input", Input).value = str(self.__showDescriptor.getIndexEpisodeDigits())
            self.query_one("#indicator_season_digits_input", Input).value = str(self.__showDescriptor.getIndicatorSeasonDigits())
            self.query_one("#indicator_episode_digits_input", Input).value = str(self.__showDescriptor.getIndicatorEpisodeDigits())

            #raise click.ClickException(f"show_id {showId}")
            for pattern in self.__pc.getPatternsForShow(showId):
                row = (pattern.getPattern(),)
                self.patternTable.add_row(*map(str, row))

            self.updateShiftedSeasons()

        else:
            defaultDigitLengths = ShowDescriptor.getDefaultDigitLengths(self.context)

            self.query_one("#index_season_digits_input", Input).value = str(
                defaultDigitLengths[ShowDescriptor.INDEX_SEASON_DIGITS_KEY]
            )
            self.query_one("#index_episode_digits_input", Input).value = str(
                defaultDigitLengths[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY]
            )
            self.query_one("#indicator_season_digits_input", Input).value = str(
                defaultDigitLengths[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]
            )
            self.query_one("#indicator_episode_digits_input", Input).value = str(
                defaultDigitLengths[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]
            )


    def getSelectedPatternDescriptor(self):

        selectedPattern = {}

        try:

            # Fetch the currently selected row when 'Enter' is pressed
            #selected_row_index = self.table.cursor_row
            row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.patternTable.get_row(row_key)

                selectedPattern['show_id'] = self.__showDescriptor.getId()
                selectedPattern['pattern'] = str(selected_row_data[0])

        except CellDoesNotExist:
            pass

        return selectedPattern


    def getSelectedShiftedSeasonObjFromInput(self):

        shiftedSeasonObj = {}

        try:

            # Fetch the currently selected row when 'Enter' is pressed
            #selected_row_index = self.table.cursor_row
            row_key, col_key = self.shiftedSeasonsTable.coordinate_to_cell_key(self.shiftedSeasonsTable.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.shiftedSeasonsTable.get_row(row_key)

                shiftedSeasonObj['original_season'] = int(selected_row_data[0])
                shiftedSeasonObj['first_episode'] = int(selected_row_data[1]) if selected_row_data[1].isnumeric() else -1
                shiftedSeasonObj['last_episode'] = int(selected_row_data[2]) if selected_row_data[2].isnumeric() else -1
                shiftedSeasonObj['season_offset'] = int(selected_row_data[3]) if selected_row_data[3].isnumeric() else 0
                shiftedSeasonObj['episode_offset'] = int(selected_row_data[4]) if selected_row_data[4].isnumeric() else 0


                if self.__showDescriptor is not None:

                    showId = int(self.__showDescriptor.getId())

                    shiftedSeasonId = self.__ssc.findShiftedSeason(showId,
                                                                   originalSeason=shiftedSeasonObj['original_season'],
                                                                   firstEpisode=shiftedSeasonObj['first_episode'],
                                                                   lastEpisode=shiftedSeasonObj['last_episode'])
                    if shiftedSeasonId is not None:
                        shiftedSeasonObj['id'] = shiftedSeasonId

        except CellDoesNotExist:
            pass

        return shiftedSeasonObj


    def action_add_pattern(self):
        if self.__showDescriptor is not None:
            self.app.push_screen(PatternDetailsScreen(showId = self.__showDescriptor.getId()), self.handle_add_pattern)


    def handle_add_pattern(self, screenResult):

        pattern = (screenResult['pattern'],)
        self.patternTable.add_row(*map(str, pattern))


    def action_edit_pattern(self):

        selectedPatternDescriptor = self.getSelectedPatternDescriptor()

        if selectedPatternDescriptor:

            selectedPatternId = self.__pc.findPattern(selectedPatternDescriptor)

            if selectedPatternId is None:
                raise click.ClickException("ShowDetailsScreen.action_edit_pattern(): Pattern to edit has no id")

            self.app.push_screen(PatternDetailsScreen(patternId = selectedPatternId, showId = self.__showDescriptor.getId()), self.handle_edit_pattern)


    def handle_edit_pattern(self, screenResult):

        try:

            row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
            self.patternTable.update_cell(row_key, self.column_key_pattern, screenResult['pattern'])

        except CellDoesNotExist:
            pass


    def action_remove_pattern(self):

        selectedPatternDescriptor = self.getSelectedPatternDescriptor()

        if selectedPatternDescriptor:

            selectedPatternId = self.__pc.findPattern(selectedPatternDescriptor)

            if selectedPatternId is None:
                raise click.ClickException("ShowDetailsScreen.action_remove_pattern(): Pattern to remove has no id")

            self.app.push_screen(PatternDeleteScreen(patternId = selectedPatternId, showId = self.__showDescriptor.getId()), self.handle_remove_pattern)


    def handle_remove_pattern(self, pattern):

        try:
            row_key, col_key = self.patternTable.coordinate_to_cell_key(self.patternTable.cursor_coordinate)
            self.patternTable.remove_row(row_key)

        except CellDoesNotExist:
            pass


    def compose(self):

        # Create the DataTable widget
        self.patternTable = DataTable(classes="five")

        # Define the columns with headers
        self.column_key_pattern = self.patternTable.add_column("Pattern", width=150)

        self.patternTable.cursor_type = 'row'


        self.shiftedSeasonsTable = DataTable(classes="five")

        self.column_key_original_season = self.shiftedSeasonsTable.add_column("Original Season", width=30)
        self.column_key_first_episode = self.shiftedSeasonsTable.add_column("First Episode", width=30)
        self.column_key_last_episode = self.shiftedSeasonsTable.add_column("Last Episode", width=30)
        self.column_key_season_offset = self.shiftedSeasonsTable.add_column("Season Offset", width=30)
        self.column_key_episode_offset = self.shiftedSeasonsTable.add_column("Episode Offset", width=30)

        self.shiftedSeasonsTable.cursor_type = 'row'


        yield Header()

        with Grid():

            # 1
            yield Static("Show" if self.__showDescriptor is not None else "New Show", id="toplabel")
            yield Button("Identify", id="identify_button")
            yield Static(" ", classes="three")

            # 2
            yield Static("ID")
            if self.__showDescriptor is not None:
                yield Static("", id="id_static", classes="four")
            else:
                yield Input(type="integer", id="id_input", classes="four")

            # 3
            yield Static("Name")
            yield Input(type="text", id="name_input", classes="four")

            # 4
            yield Static("Year")
            yield Input(type="integer", id="year_input", classes="four")

            # 5
            yield Static(" ", classes="five")

            # 6
            yield Static("Index Season Digits")
            yield Input(type="integer", id="index_season_digits_input", classes="four")

            # 7
            yield Static("Index Episode Digits")
            yield Input(type="integer", id="index_episode_digits_input", classes="four")
|
|
||||||
|
|
||||||
#8
|
|
||||||
yield Static("Indicator Season Digits")
|
|
||||||
yield Input(type="integer", id="indicator_season_digits_input", classes="four")
|
|
||||||
|
|
||||||
#9
|
|
||||||
yield Static("Indicator Edisode Digits")
|
|
||||||
yield Input(type="integer", id="indicator_episode_digits_input", classes="four")
|
|
||||||
|
|
||||||
# 10
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 11
|
|
||||||
yield Static("Shifted seasons", classes="two")
|
|
||||||
|
|
||||||
if self.__showDescriptor is not None:
|
|
||||||
yield Button("Add", id="button_add_shifted_season")
|
|
||||||
yield Button("Edit", id="button_edit_shifted_season")
|
|
||||||
yield Button("Delete", id="button_delete_shifted_season")
|
|
||||||
else:
|
|
||||||
yield Static(" ")
|
|
||||||
yield Static(" ")
|
|
||||||
yield Static(" ")
|
|
||||||
|
|
||||||
# 12
|
|
||||||
yield self.shiftedSeasonsTable
|
|
||||||
|
|
||||||
# 13
|
|
||||||
yield Static("File patterns", classes="five")
|
|
||||||
# 14
|
|
||||||
yield self.patternTable
|
|
||||||
|
|
||||||
# 15
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
# 16
|
|
||||||
yield Button("Save", id="save_button")
|
|
||||||
yield Button("Cancel", id="cancel_button")
|
|
||||||
|
|
||||||
|
|
||||||
yield Footer()
|
|
||||||
|
|
||||||
|
|
||||||
def getShowDescriptorFromInput(self) -> ShowDescriptor:
|
|
||||||
|
|
||||||
kwargs = {ShowDescriptor.CONTEXT_KEY: self.context}
|
|
||||||
|
|
||||||
try:
|
|
||||||
if self.__showDescriptor:
|
|
||||||
kwargs[ShowDescriptor.ID_KEY] = int(self.__showDescriptor.getId())
|
|
||||||
else:
|
|
||||||
kwargs[ShowDescriptor.ID_KEY] = int(self.query_one("#id_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
return None
|
|
||||||
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.NAME_KEY] = str(self.query_one("#name_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.YEAR_KEY] = int(self.query_one("#year_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDEX_SEASON_DIGITS_KEY] = int(self.query_one("#index_season_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDEX_EPISODE_DIGITS_KEY] = int(self.query_one("#index_episode_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY] = int(self.query_one("#indicator_season_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
kwargs[ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY] = int(self.query_one("#indicator_episode_digits_input", Input).value)
|
|
||||||
except ValueError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
return ShowDescriptor(**kwargs)
|
|
||||||
|
|
||||||
|
|
||||||
# Event handler for button press
|
|
||||||
def on_button_pressed(self, event: Button.Pressed) -> None:
|
|
||||||
|
|
||||||
if event.button.id == "save_button":
|
|
||||||
|
|
||||||
showDescriptor = self.getShowDescriptorFromInput()
|
|
||||||
|
|
||||||
if not showDescriptor is None:
|
|
||||||
if self.__sc.updateShow(showDescriptor):
|
|
||||||
self.dismiss(showDescriptor)
|
|
||||||
else:
|
|
||||||
#TODO: Meldung
|
|
||||||
self.app.pop_screen()
|
|
||||||
|
|
||||||
if event.button.id == "cancel_button":
|
|
||||||
self.app.pop_screen()
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "identify_button":
|
|
||||||
|
|
||||||
showDescriptor = self.getShowDescriptorFromInput()
|
|
||||||
if not showDescriptor is None:
|
|
||||||
showName, showYear = self.__tc.getShowNameAndYear(showDescriptor.getId())
|
|
||||||
|
|
||||||
self.query_one("#name_input", Input).value = filterFilename(showName)
|
|
||||||
self.query_one("#year_input", Input).value = str(showYear)
|
|
||||||
|
|
||||||
|
|
||||||
if event.button.id == "button_add_shifted_season":
|
|
||||||
if not self.__showDescriptor is None:
|
|
||||||
self.app.push_screen(ShiftedSeasonDetailsScreen(showId = self.__showDescriptor.getId()), self.handle_update_shifted_season)
|
|
||||||
|
|
||||||
if event.button.id == "button_edit_shifted_season":
|
|
||||||
selectedShiftedSeasonObj = self.getSelectedShiftedSeasonObjFromInput()
|
|
||||||
if 'id' in selectedShiftedSeasonObj.keys():
|
|
||||||
self.app.push_screen(ShiftedSeasonDetailsScreen(showId = self.__showDescriptor.getId(), shiftedSeasonId=selectedShiftedSeasonObj['id']), self.handle_update_shifted_season)
|
|
||||||
|
|
||||||
if event.button.id == "button_delete_shifted_season":
|
|
||||||
selectedShiftedSeasonObj = self.getSelectedShiftedSeasonObjFromInput()
|
|
||||||
if 'id' in selectedShiftedSeasonObj.keys():
|
|
||||||
self.app.push_screen(ShiftedSeasonDeleteScreen(showId = self.__showDescriptor.getId(), shiftedSeasonId=selectedShiftedSeasonObj['id']), self.handle_delete_shifted_season)
|
|
||||||
|
|
||||||
|
|
||||||
def handle_update_shifted_season(self, screenResult):
|
|
||||||
self.updateShiftedSeasons()
|
|
||||||
|
|
||||||
def handle_delete_shifted_season(self, screenResult):
|
|
||||||
self.updateShiftedSeasons()
|
|
||||||
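The `ValueError`-guarded parsing used by `getShowDescriptorFromInput()` can be reduced to a small standalone sketch. `collect_int_fields` below is a hypothetical helper, not part of the app; it shows the same idea of converting each optional field with `int()` and silently dropping anything that does not parse, which is how the screen tolerates empty inputs:

```python
# A minimal sketch (hypothetical helper, not part of the codebase) of the
# parsing pattern used by getShowDescriptorFromInput().
def collect_int_fields(raw: dict) -> dict:
    """Return only the fields whose values parse as integers."""
    parsed = {}
    for key, value in raw.items():
        try:
            parsed[key] = int(value)
        except (ValueError, TypeError):
            continue  # skip empty or non-numeric inputs, as the screen does
    return parsed
```

Centralizing the guard like this avoids repeating one `try`/`except ValueError` block per input field.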
@@ -1,168 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, DataTable
from textual.containers import Grid

from .show_controller import ShowController

from .show_details_screen import ShowDetailsScreen
from .show_delete_screen import ShowDeleteScreen

from ffx.show_descriptor import ShowDescriptor

from textual.widgets._data_table import CellDoesNotExist


class ShowsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 1;
        grid-rows: 2 auto;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #top {
        height: 1;
    }

    #two {
        column-span: 2;
        row-span: 2;
        tint: magenta 40%;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    BINDINGS = [
        ("e", "edit_show", "Edit Show"),
        ("n", "new_show", "New Show"),
        ("d", "delete_show", "Delete Show"),
    ]


    def __init__(self):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session']  # convenience

        self.__sc = ShowController(context = self.context)


    def getSelectedShowId(self):

        try:
            # Fetch the row currently under the cursor
            row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)

            if row_key is not None:
                selected_row_data = self.table.get_row(row_key)

                return selected_row_data[0]

        except CellDoesNotExist:
            return None


    def action_new_show(self):
        self.app.push_screen(ShowDetailsScreen(), self.handle_new_screen)

    def handle_new_screen(self, screenResult):

        show = (screenResult['id'], screenResult['name'], screenResult['year'])
        self.table.add_row(*map(str, show))


    def action_edit_show(self):

        selectedShowId = self.getSelectedShowId()

        if selectedShowId is not None:
            self.app.push_screen(ShowDetailsScreen(showId = selectedShowId), self.handle_edit_screen)


    def handle_edit_screen(self, showDescriptor: ShowDescriptor):

        try:
            row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)

            self.table.update_cell(row_key, self.column_key_name, showDescriptor.getName())
            self.table.update_cell(row_key, self.column_key_year, showDescriptor.getYear())

        except CellDoesNotExist:
            pass


    def action_delete_show(self):

        selectedShowId = self.getSelectedShowId()

        if selectedShowId is not None:
            self.app.push_screen(ShowDeleteScreen(showId = selectedShowId), self.handle_delete_show)


    def handle_delete_show(self, showDescriptor: ShowDescriptor):

        try:
            row_key, col_key = self.table.coordinate_to_cell_key(self.table.cursor_coordinate)
            self.table.remove_row(row_key)

        except CellDoesNotExist:
            pass


    def on_mount(self) -> None:
        for show in self.__sc.getAllShows():
            row = (int(show.id), show.name, show.year)
            self.table.add_row(*map(str, row))  # convert each element to a string before adding


    def compose(self):

        # Create the DataTable widget
        self.table = DataTable()

        # Define the columns with headers
        self.column_key_id = self.table.add_column("ID", width=10)
        self.column_key_name = self.table.add_column("Name", width=50)
        self.column_key_year = self.table.add_column("Year", width=10)

        self.table.cursor_type = 'row'

        yield Header()

        with Grid():

            yield Static("Shows")

            yield self.table

        f = Footer()
        f.description = "yolo"

        yield f
@@ -1,202 +0,0 @@
import click

from ffx.model.track import Track

from ffx.model.media_tag import MediaTag
from ffx.model.track_tag import TrackTag


class TagController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience


    def updateMediaTag(self, patternId, tagKey, tagValue):

        try:
            s = self.Session()

            q = s.query(MediaTag).filter(MediaTag.pattern_id == int(patternId),
                                         MediaTag.key == str(tagKey))
            tag = q.first()
            if tag:
                tag.value = str(tagValue)
            else:
                tag = MediaTag(pattern_id = int(patternId),
                               key = str(tagKey),
                               value = str(tagValue))
                s.add(tag)
            s.commit()

            return int(tag.id)

        except Exception as ex:
            raise click.ClickException(f"TagController.updateMediaTag(): {repr(ex)}")
        finally:
            s.close()

    def updateTrackTag(self, trackId, tagKey, tagValue):

        try:
            s = self.Session()

            q = s.query(TrackTag).filter(TrackTag.track_id == int(trackId),
                                         TrackTag.key == str(tagKey))
            tag = q.first()
            if tag:
                tag.value = str(tagValue)
            else:
                tag = TrackTag(track_id = int(trackId),
                               key = str(tagKey),
                               value = str(tagValue))
                s.add(tag)
            s.commit()

            return int(tag.id)

        except Exception as ex:
            raise click.ClickException(f"TagController.updateTrackTag(): {repr(ex)}")
        finally:
            s.close()

    def deleteMediaTagByKey(self, patternId, tagKey):

        try:
            s = self.Session()

            tag = s.query(MediaTag).filter(
                MediaTag.pattern_id == int(patternId),
                MediaTag.key == str(tagKey),
            ).first()
            if tag is not None:
                s.delete(tag)
                s.commit()
                return True
            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteMediaTagByKey(): {repr(ex)}")
        finally:
            s.close()

    def deleteTrackTagByKey(self, trackId, tagKey):

        try:
            s = self.Session()

            q = s.query(TrackTag).filter(TrackTag.track_id == int(trackId),
                                         TrackTag.key == str(tagKey))
            tag = q.first()
            if tag:
                s.delete(tag)
                s.commit()
                return True
            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteTrackTagByKey(): {repr(ex)}")
        finally:
            s.close()

    def findAllMediaTags(self, patternId) -> dict:

        try:
            s = self.Session()

            tags = s.query(MediaTag).filter(MediaTag.pattern_id == int(patternId)).all()
            return {t.key: t.value for t in tags}

        except Exception as ex:
            raise click.ClickException(f"TagController.findAllMediaTags(): {repr(ex)}")
        finally:
            s.close()


    def findAllTrackTags(self, trackId) -> dict:

        try:
            s = self.Session()

            tags = s.query(TrackTag).filter(TrackTag.track_id == int(trackId)).all()
            return {t.key: t.value for t in tags}

        except Exception as ex:
            raise click.ClickException(f"TagController.findAllTrackTags(): {repr(ex)}")
        finally:
            s.close()


    def findMediaTag(self, patternId : int, tagKey : str) -> MediaTag:

        try:
            s = self.Session()
            # Query MediaTag (not Track) and filter on pattern_id, matching the
            # other MediaTag methods; the original queried the wrong model.
            return s.query(MediaTag).filter(MediaTag.pattern_id == int(patternId), MediaTag.key == str(tagKey)).first()

        except Exception as ex:
            raise click.ClickException(f"TagController.findMediaTag(): {repr(ex)}")
        finally:
            s.close()

    def findTrackTag(self, trackId : int, tagKey : str) -> TrackTag:

        try:
            s = self.Session()
            return s.query(TrackTag).filter(
                TrackTag.track_id == int(trackId),
                TrackTag.key == str(tagKey),
            ).first()

        except Exception as ex:
            raise click.ClickException(f"TagController.findTrackTag(): {repr(ex)}")
        finally:
            s.close()


    def deleteMediaTag(self, tagId) -> bool:
        try:
            s = self.Session()
            tag = s.query(MediaTag).filter(MediaTag.id == int(tagId)).first()

            if tag is not None:
                s.delete(tag)
                s.commit()
                return True

            return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteMediaTag(): {repr(ex)}")
        finally:
            s.close()


    def deleteTrackTag(self, tagId : int) -> bool:

        if type(tagId) is not int:
            raise TypeError('TagController.deleteTrackTag(): Argument tagId is required to be of type int')

        try:
            s = self.Session()
            tag = s.query(TrackTag).filter(TrackTag.id == int(tagId)).first()

            if tag is not None:
                s.delete(tag)
                s.commit()
                return True

            return False

        except Exception as ex:
            raise click.ClickException(f"TagController.deleteTrackTag(): {repr(ex)}")
        finally:
            s.close()
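The update-or-insert pattern used throughout `TagController` (`query(...).first()`, then either mutate the row or add a new one, then commit) can be sketched without a database. `upsert_media_tag` and its list-of-dicts row store below are illustrative stand-ins, not part of the codebase:

```python
# Sketch (hypothetical, not part of the codebase) of the upsert pattern in
# TagController.updateMediaTag(), with a plain list of dicts standing in for
# the SQLAlchemy session and table.
def upsert_media_tag(rows: list, pattern_id: int, key: str, value: str) -> dict:
    """Update the row matching (pattern_id, key), or insert a new one."""
    # find an existing row matching both filter columns, like q.first()
    for row in rows:
        if row['pattern_id'] == pattern_id and row['key'] == key:
            row['value'] = str(value)      # update branch
            return row
    row = {'pattern_id': pattern_id, 'key': key, 'value': str(value)}
    rows.append(row)                        # insert branch
    return row
```

With SQLAlchemy the same two branches apply, except the session tracks the mutation and `commit()` persists either outcome.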
@@ -1,98 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid


# Screen[dict[int, str, int]]
class TagDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 4 9;
        grid-rows: 2 2 2 2 2 2 2 2 2;
        grid-columns: 30 30 30 30;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }
    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, key=None, value=None):
        super().__init__()
        self.__key = key
        self.__value = value


    def on_mount(self):

        self.query_one("#keylabel", Static).update(str(self.__key))
        self.query_one("#valuelabel", Static).update(str(self.__value))


    def compose(self):

        yield Header()

        with Grid():

            # 1
            yield Static("Are you sure you want to delete this tag?", id="toplabel", classes="five")

            # 2
            yield Static("Key")
            yield Static(" ", id="keylabel", classes="four")

            # 3
            yield Static("Value")
            yield Static(" ", id="valuelabel", classes="four")

            # 4
            yield Static(" ", classes="five")

            # 9
            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()


    # Event handler for button presses
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            tag = (self.__key, self.__value)
            self.dismiss(tag)

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,132 +0,0 @@
from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid


# Screen[dict[int, str, int]]
class TagDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 20;
        grid-rows: 2 2 2 2 2 3 2 2 2 2 2 6 2 2 6 2 2 2 2 6;
        grid-columns: 25 25 25 25 225;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    SelectionList {
        border: none;
        min-height: 6;
    }
    Select {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, key=None, value=None):
        super().__init__()
        self.__key = key
        self.__value = value


    def on_mount(self):

        if self.__key is not None:
            self.query_one("#key_input", Input).value = str(self.__key)

        if self.__value is not None:
            self.query_one("#value_input", Input).value = str(self.__value)


    def compose(self):

        yield Header()

        with Grid():

            # 8
            yield Static("Key")
            yield Input(id="key_input", classes="four")

            yield Static("Value")
            yield Input(id="value_input", classes="four")

            # 17
            yield Static(" ", classes="five")

            # 18
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")

            # 19
            yield Static(" ", classes="five")

            # 20
            yield Static(" ", classes="five", id="messagestatic")

        yield Footer(id="footer")


    def getTagFromInput(self):

        tagKey = self.query_one("#key_input", Input).value
        tagValue = self.query_one("#value_input", Input).value

        return (tagKey, tagValue)


    # Event handler for button presses
    def on_button_pressed(self, event: Button.Pressed) -> None:

        # Check if the button pressed is the one we are interested in
        if event.button.id == "save_button":
            self.dismiss(self.getTagFromInput())

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,135 +0,0 @@
import os, requests, time
from datetime import datetime

from .logging_utils import get_ffx_logger


class TMDB_REQUEST_EXCEPTION(Exception):
    def __init__(self, statusCode, statusMessage):
        errorMessage = f"TMDB query failed with status code {statusCode}: {statusMessage}"
        super().__init__(errorMessage)

class TMDB_API_KEY_NOT_PRESENT_EXCEPTION(Exception):
    def __str__(self):
        return 'TMDB api key is not available, please set environment variable TMDB_API_KEY'

class TMDB_EXCESSIVE_USAGE_EXCEPTION(Exception):
    def __str__(self):
        return 'Rate limit was triggered too often'


class TmdbController():

    DEFAULT_LANGUAGE = 'de-DE'

    RATE_LIMIT_WAIT_SECONDS = 10
    RATE_LIMIT_RETRIES = 3

    def __init__(self, context = None):
        self.__context = context

        if context is None:
            self.__logger = get_ffx_logger()
        else:
            self.__logger = context['logger']

        self.__tmdbApiKey = os.environ.get('TMDB_API_KEY', None)
        if self.__tmdbApiKey is None:
            raise TMDB_API_KEY_NOT_PRESENT_EXCEPTION()

        self.tmdbLanguage = TmdbController.DEFAULT_LANGUAGE


    def getTmdbRequest(self, tmdbUrl):
        retries = TmdbController.RATE_LIMIT_RETRIES
        while True:
            response = requests.get(tmdbUrl)
            if response.status_code == 429:
                if not retries:
                    raise TMDB_EXCESSIVE_USAGE_EXCEPTION()
                self.__logger.warning('TMDB Rate limit (status_code 429)')
                time.sleep(TmdbController.RATE_LIMIT_WAIT_SECONDS)
                retries -= 1
            else:
                jsonResult = response.json()
                if ('success' in jsonResult.keys()
                        and not jsonResult['success']):
                    raise TMDB_REQUEST_EXCEPTION(jsonResult['status_code'], jsonResult['status_message'])
                return jsonResult


    def queryShow(self, showId):
        """
        First level keys in the response object:
        adult                 bool
        backdrop_path         str
        created_by            []
        episode_run_time      []
        first_air_date        str YYYY-MM-DD
        genres                []
        homepage              str
        id                    int
        in_production         bool
        languages             []
        last_air_date         str YYYY-MM-DD
        last_episode_to_air   {}
        name                  str
        next_episode_to_air   null
        networks              []
        number_of_episodes    int
        number_of_seasons     int
        origin_country        []
        original_language     str
        original_name         str
        overview              str
        popularity            float
        poster_path           str
        production_companies  []
        production_countries  []
        seasons               []
        spoken_languages      []
        status                str
        tagline               str
        type                  str
        vote_average          float
        vote_count            int
        """

        urlParams = f"?language={self.tmdbLanguage}&api_key={self.__tmdbApiKey}"

        tmdbUrl = f"https://api.themoviedb.org/3/tv/{showId}{urlParams}"

        return self.getTmdbRequest(tmdbUrl)


    def getShowNameAndYear(self, showId: int):

        showResult = self.queryShow(int(showId))
        firstAirDate = datetime.strptime(showResult['first_air_date'], '%Y-%m-%d')

        return str(showResult['name']), int(firstAirDate.year)


    def queryEpisode(self, showId, season, episode):
        """
        First level keys in the response object:
        air_date        str 'YYYY-MM-DD'
        crew            []
        episode_number  int
        guest_stars     []
        name            str
        overview        str
        id              int
        production_code
        runtime         int
        season_number   int
        still_path      str '/filename.jpg'
        vote_average    float
        vote_count      int
        """

        urlParams = f"?language={self.tmdbLanguage}&api_key={self.__tmdbApiKey}"

        tmdbUrl = f"https://api.themoviedb.org/3/tv/{showId}/season/{season}/episode/{episode}{urlParams}"

        return self.getTmdbRequest(tmdbUrl)
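The retry loop in `getTmdbRequest` can be isolated into a transport-agnostic sketch. `fetch_with_retry` and `ExcessiveUsageError` below are hypothetical names, and the injected `fetch` callable stands in for `requests.get`, which makes the back-off logic testable without network access:

```python
import time

class ExcessiveUsageError(Exception):
    """Raised when the rate limit was hit more often than allowed (sketch)."""

def fetch_with_retry(fetch, retries=3, wait_seconds=0):
    """Call fetch() until it returns a non-429 response or retries run out.

    fetch must return an object exposing .status_code and .json().
    """
    while True:
        response = fetch()
        if response.status_code == 429:
            if not retries:
                raise ExcessiveUsageError()
            time.sleep(wait_seconds)   # back off before retrying
            retries -= 1
        else:
            return response.json()
```

Injecting the fetcher this way is the usual seam for unit-testing rate-limit handling; the production loop additionally logs the 429 and inspects the TMDB `success` field before returning.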
@@ -1,58 +0,0 @@
|
|||||||
from enum import Enum
|
|
||||||
|
|
||||||
|
|
||||||
class TrackCodec(Enum):
|
|
||||||
|
|
||||||
H265 = {'identifier': 'hevc', 'format': 'h265', 'extension': 'h265' ,'label': 'H.265'}
|
|
||||||
H264 = {'identifier': 'h264', 'format': 'h264', 'extension': 'h264' ,'label': 'H.264'}
|
|
||||||
MPEG4 = {'identifier': 'mpeg4', 'format': 'm4v', 'extension': 'm4v' ,'label': 'MPEG-4'}
|
|
||||||
MPEG2 = {'identifier': 'mpeg2video', 'format': 'mpeg2video', 'extension': 'mpg' ,'label': 'MPEG-2'}
|
|
||||||
|
|
||||||
AAC = {'identifier': 'aac', 'format': None, 'extension': 'aac' , 'label': 'AAC'}
|
|
||||||
AC3 = {'identifier': 'ac3', 'format': 'ac3', 'extension': 'ac3' , 'label': 'AC3'}
|
|
||||||
    EAC3 = {'identifier': 'eac3', 'format': 'eac3', 'extension': 'eac3', 'label': 'EAC3'}
    DTS = {'identifier': 'dts', 'format': 'dts', 'extension': 'dts', 'label': 'DTS'}
    MP3 = {'identifier': 'mp3', 'format': 'mp3', 'extension': 'mp3', 'label': 'MP3'}

    SRT = {'identifier': 'subrip', 'format': 'srt', 'extension': 'srt', 'label': 'SRT'}
    ASS = {'identifier': 'ass', 'format': 'ass', 'extension': 'ass', 'label': 'ASS'}
    TTF = {'identifier': 'ttf', 'format': None, 'extension': 'ttf', 'label': 'TTF'}
    PGS = {'identifier': 'hdmv_pgs_subtitle', 'format': 'sup', 'extension': 'sup', 'label': 'PGS'}
    VOBSUB = {'identifier': 'dvd_subtitle', 'format': None, 'extension': 'mkv', 'label': 'VobSub'}

    PNG = {'identifier': 'png', 'format': None, 'extension': 'png', 'label': 'PNG'}

    UNKNOWN = {'identifier': 'unknown', 'format': None, 'extension': None, 'label': 'UNKNOWN'}


    def identifier(self):
        """Returns the codec identifier"""
        return str(self.value['identifier'])

    def label(self):
        """Returns the codec label as string"""
        return str(self.value['label'])

    def format(self):
        """Returns the codec format (None if the codec is not convertible)"""
        return self.value['format']

    def extension(self):
        """Returns the corresponding file extension"""
        return str(self.value['extension'])

    @staticmethod
    def identify(identifier: str):
        """Returns the codec matching the given ffprobe identifier, UNKNOWN otherwise"""
        clist = [c for c in TrackCodec if c.value['identifier'] == str(identifier)]
        if clist:
            return clist[0]
        else:
            return TrackCodec.UNKNOWN

    @staticmethod
    def fromLabel(label: str):
        """Returns the codec matching the given label, UNKNOWN otherwise"""
        clist = [c for c in TrackCodec if c.value['label'] == str(label)]
        if clist:
            return clist[0]
        else:
            return TrackCodec.UNKNOWN
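The members above all share the same dict-valued shape, and `identify()` resolves an ffprobe identifier by a linear scan with an `UNKNOWN` fallback. A minimal standalone sketch of this pattern (a hypothetical `Codec` enum, not part of ffx) shows why the fallback matters:

```python
from enum import Enum

class Codec(Enum):
    # Each member's value bundles the ffprobe identifier with display metadata
    EAC3 = {'identifier': 'eac3', 'label': 'EAC3'}
    SRT = {'identifier': 'subrip', 'label': 'SRT'}
    UNKNOWN = {'identifier': 'unknown', 'label': 'UNKNOWN'}

    @staticmethod
    def identify(identifier: str):
        # Linear scan over the members; fall back to UNKNOWN on no match
        matches = [c for c in Codec if c.value['identifier'] == str(identifier)]
        return matches[0] if matches else Codec.UNKNOWN

print(Codec.identify('subrip').value['label'])  # SRT
print(Codec.identify('opus').value['label'])    # UNKNOWN
```

Because the enum values are dicts (unhashable), lookup has to go through such a scan rather than `Codec('subrip')`; the fallback keeps callers from having to handle a `ValueError` for every unmapped codec.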
@@ -1,278 +0,0 @@
import click

from ffx.model.track import Track
from ffx.model.track_tag import TrackTag
from ffx.track_descriptor import TrackDescriptor

from .track_type import TrackType
from .track_disposition import TrackDisposition


class TrackController:

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session']  # convenience

        self.__configurationData = self.context['config'].getData()

        metadataConfiguration = self.__configurationData.get('metadata', {})

        self.__signatureTags = metadataConfiguration.get('signature', {})
        self.__removeGlobalKeys = metadataConfiguration.get('remove', [])
        self.__ignoreGlobalKeys = metadataConfiguration.get('ignore', [])
        self.__removeTrackKeys = metadataConfiguration.get('streams', {}).get('remove', [])
        self.__ignoreTrackKeys = metadataConfiguration.get('streams', {}).get('ignore', [])


    def addTrack(self, trackDescriptor: TrackDescriptor, patternId=None):

        # option to override the pattern id in case the track descriptor has not set it
        patId = int(trackDescriptor.getPatternId() if patternId is None else patternId)

        s = self.Session()
        try:
            track = Track(pattern_id=patId,
                          track_type=int(trackDescriptor.getType().index()),
                          codec_name=str(trackDescriptor.getCodec().identifier()),
                          index=int(trackDescriptor.getIndex()),
                          source_index=int(trackDescriptor.getSourceIndex()),
                          disposition_flags=int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet())),
                          audio_layout=trackDescriptor.getAudioLayout().index())

            s.add(track)
            s.commit()

            for k, v in trackDescriptor.getTags().items():

                # Filter tags that make no sense to preserve
                if k not in self.__ignoreTrackKeys and k not in self.__removeTrackKeys:
                    tag = TrackTag(track_id=track.id,
                                   key=k,
                                   value=v)
                    s.add(tag)
            s.commit()

        except Exception as ex:
            raise click.ClickException(f"TrackController.addTrack(): {repr(ex)}")
        finally:
            s.close()


    def updateTrack(self, trackId, trackDescriptor: TrackDescriptor):

        if type(trackDescriptor) is not TrackDescriptor:
            raise TypeError('TrackController.updateTrack(): Argument trackDescriptor is required to be of type TrackDescriptor')

        s = self.Session()
        try:
            track = s.query(Track).filter(Track.id == int(trackId)).first()

            if track is not None:

                track.index = int(trackDescriptor.getIndex())
                track.track_type = int(trackDescriptor.getType().index())
                track.codec_name = str(trackDescriptor.getCodec().identifier())
                track.audio_layout = int(trackDescriptor.getAudioLayout().index())
                track.disposition_flags = int(TrackDisposition.toFlags(trackDescriptor.getDispositionSet()))

                descriptorTags = trackDescriptor.getTags()
                tagKeysInDescriptor = set(descriptorTags.keys())
                tagKeysInDb = {t.key for t in track.track_tags}

                for k in tagKeysInDescriptor & tagKeysInDb:  # to update
                    tags = [t for t in track.track_tags if t.key == k]
                    tags[0].value = descriptorTags[k]
                for k in tagKeysInDescriptor - tagKeysInDb:  # to add
                    tag = TrackTag(track_id=track.id, key=k, value=descriptorTags[k])
                    s.add(tag)
                for k in tagKeysInDb - tagKeysInDescriptor:  # to remove
                    tags = [t for t in track.track_tags if t.key == k]
                    s.delete(tags[0])

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TrackController.updateTrack(): {repr(ex)}")
        finally:
            s.close()
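The three set differences in `updateTrack()` above partition the tag keys into update/add/remove groups. The logic can be checked in isolation with plain dicts; a minimal sketch (hypothetical `diff_tags` helper, no SQLAlchemy involved):

```python
def diff_tags(db_tags: dict, descriptor_tags: dict) -> dict:
    """Classify tag keys into update/add/remove, mirroring the set arithmetic above."""
    in_db = set(db_tags)
    in_desc = set(descriptor_tags)
    return {
        'update': sorted(in_desc & in_db),   # present on both sides: overwrite value
        'add': sorted(in_desc - in_db),      # only in the descriptor: insert
        'remove': sorted(in_db - in_desc),   # only in the database: delete
    }

result = diff_tags({'language': 'ger', 'encoder': 'x264'},
                   {'language': 'eng', 'title': 'Main'})
print(result)  # {'update': ['language'], 'add': ['title'], 'remove': ['encoder']}
```

Keys are compared by name only; values are not inspected, so an unchanged value still lands in `update`, which is harmless here because the overwrite is idempotent.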
    def findTracks(self, patternId):

        s = self.Session()
        try:
            q = s.query(Track).filter(Track.pattern_id == int(patternId))
            return sorted(q.all(), key=lambda t: t.getIndex())

        except Exception as ex:
            raise click.ClickException(f"TrackController.findTracks(): {repr(ex)}")
        finally:
            s.close()


    def findSiblingDescriptors(self, patternId):
        """Finds all stored tracks related to a pattern, packs them into descriptors
        (also setting sub indices) and returns the list of descriptors"""

        siblingTracks = self.findTracks(patternId)
        siblingDescriptors = []

        subIndexCounter = {}
        st: Track
        for st in siblingTracks:
            trackType = st.getType()

            if trackType not in subIndexCounter:
                subIndexCounter[trackType] = 0
            siblingDescriptors.append(st.getDescriptor(subIndex=subIndexCounter[trackType]))
            subIndexCounter[trackType] += 1

        return siblingDescriptors
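The per-type counter in `findSiblingDescriptors()` gives each track a sub index that counts only tracks of the same type (first audio track is audio 0, second is audio 1, regardless of video tracks in between). A self-contained sketch of that numbering scheme (hypothetical `assign_sub_indices` helper):

```python
from collections import defaultdict

def assign_sub_indices(track_types):
    """Number tracks per type in order of appearance."""
    counter = defaultdict(int)  # replaces the explicit "if not in dict" initialisation
    numbered = []
    for t in track_types:
        numbered.append((t, counter[t]))
        counter[t] += 1
    return numbered

print(assign_sub_indices(['video', 'audio', 'audio', 'subtitle', 'audio']))
# [('video', 0), ('audio', 0), ('audio', 1), ('subtitle', 0), ('audio', 2)]
```

Using `defaultdict(int)` is equivalent to the manual zero-initialisation in the method above; both guarantee the counter starts at 0 the first time a type appears.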
    # TODO: solve with an optional parameter ^
    def findVideoTracks(self, patternId):

        s = self.Session()
        try:
            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.track_type == TrackType.VIDEO.index())
            return q.all()

        except Exception as ex:
            raise click.ClickException(f"TrackController.findVideoTracks(): {repr(ex)}")
        finally:
            s.close()

    def findAudioTracks(self, patternId):

        s = self.Session()
        try:
            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.track_type == TrackType.AUDIO.index())
            return q.all()

        except Exception as ex:
            raise click.ClickException(f"TrackController.findAudioTracks(): {repr(ex)}")
        finally:
            s.close()

    def findSubtitleTracks(self, patternId):

        s = self.Session()
        try:
            q = s.query(Track).filter(Track.pattern_id == int(patternId), Track.track_type == TrackType.SUBTITLE.index())
            return q.all()

        except Exception as ex:
            raise click.ClickException(f"TrackController.findSubtitleTracks(): {repr(ex)}")
        finally:
            s.close()


    def getTrack(self, patternId: int, index: int) -> Track:

        s = self.Session()
        try:
            return s.query(Track).filter(
                Track.pattern_id == int(patternId),
                Track.index == int(index),
            ).first()

        except Exception as ex:
            raise click.ClickException(f"TrackController.getTrack(): {repr(ex)}")
        finally:
            s.close()

    def setDispositionState(self, patternId: int, index: int, disposition: TrackDisposition, state: bool):

        if type(patternId) is not int:
            raise TypeError('TrackController.setDispositionState(): Argument patternId is required to be of type int')
        if type(index) is not int:
            raise TypeError('TrackController.setDispositionState(): Argument index is required to be of type int')
        if type(disposition) is not TrackDisposition:
            raise TypeError('TrackController.setDispositionState(): Argument disposition is required to be of type TrackDisposition')
        if type(state) is not bool:
            raise TypeError('TrackController.setDispositionState(): Argument state is required to be of type bool')

        s = self.Session()
        try:
            track = s.query(Track).filter(Track.pattern_id == patternId, Track.index == index).first()

            if track is not None:

                if state:
                    track.setDisposition(disposition)
                else:
                    track.resetDisposition(disposition)

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"TrackController.setDispositionState(): {repr(ex)}")
        finally:
            s.close()

    def deleteTrack(self, trackId):

        s = self.Session()
        try:
            track = s.query(Track).filter(Track.id == int(trackId)).first()

            if track is not None:
                patternId = int(track.pattern_id)

                q_siblings = s.query(Track).filter(Track.pattern_id == patternId).order_by(Track.index)
                siblingTracks = q_siblings.all()

                if len(siblingTracks) <= 1:
                    raise click.ClickException(
                        f"Cannot delete the last track from pattern #{patternId}. Patterns must define at least one track."
                    )

                # Delete the requested track and close the index gap it leaves behind
                index = 0
                for track in siblingTracks:

                    if track.id == int(trackId):
                        s.delete(track)
                    else:
                        track.index = index
                        index += 1

                s.commit()
                return True

            return False

        except Exception as ex:
            raise click.ClickException(f"TrackController.deleteTrack(): {repr(ex)}")
        finally:
            s.close()
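The reindexing loop in `deleteTrack()` renumbers the surviving siblings so the `index` column stays contiguous after the deletion. A pure-Python sketch of the same invariant on plain dicts (hypothetical `delete_and_reindex` helper, no database):

```python
def delete_and_reindex(tracks, delete_id):
    """Remove one track and renumber the survivors contiguously from 0."""
    if len(tracks) <= 1:
        # mirrors the last-track guard in deleteTrack()
        raise ValueError("cannot delete the last track of a pattern")
    survivors = [t for t in tracks if t['id'] != delete_id]
    for new_index, t in enumerate(survivors):
        t['index'] = new_index
    return survivors

tracks = [{'id': 7, 'index': 0}, {'id': 9, 'index': 1}, {'id': 12, 'index': 2}]
print(delete_and_reindex(tracks, 9))
# [{'id': 7, 'index': 0}, {'id': 12, 'index': 1}]
```

Iterating the siblings already ordered by index (as `order_by(Track.index)` does above) is what makes a single counter sufficient: every survivor simply receives the next free slot.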
    # def setDefaultSubTrack(self, trackType, subIndex):
    #     pass
    #
    # def setForcedSubTrack(self, trackType, subIndex):
    #     pass
@@ -1,115 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from ffx.track_descriptor import TrackDescriptor


# Screen[dict[int, str, int]]
class TrackDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 4 9;
        grid-rows: 2 2 2 2 2 2 2 2 2;
        grid-columns: 30 30 30 30;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }
    .four {
        column-span: 4;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, trackDescriptor: TrackDescriptor):
        super().__init__()

        if type(trackDescriptor) is not TrackDescriptor:
            raise click.ClickException('TrackDeleteScreen.__init__(): trackDescriptor is required to be of type TrackDescriptor')

        self.__trackDescriptor = trackDescriptor


    def on_mount(self):

        self.query_one("#subindexlabel", Static).update(str(self.__trackDescriptor.getSubIndex()))
        self.query_one("#patternlabel", Static).update(str(self.__trackDescriptor.getPatternId()))
        self.query_one("#languagelabel", Static).update(str(self.__trackDescriptor.getLanguage().label()))
        self.query_one("#titlelabel", Static).update(str(self.__trackDescriptor.getTitle()))


    def compose(self):

        yield Header()

        with Grid():

            # row 1
            yield Static(f"Are you sure you want to delete the following {self.__trackDescriptor.getType().label()} track?", id="toplabel", classes="four")

            # row 2
            yield Static("sub index")
            yield Static(" ", id="subindexlabel", classes="three")

            # row 3
            yield Static("from pattern")
            yield Static(" ", id="patternlabel", classes="three")

            # row 4
            yield Static(" ", classes="four")

            # row 5
            yield Static("Language")
            yield Static(" ", id="languagelabel", classes="three")

            # row 6
            yield Static("Title")
            yield Static(" ", id="titlelabel", classes="three")

            # row 7
            yield Static(" ", classes="four")

            # row 8
            yield Static(" ", classes="four")

            # row 9
            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()


    # Event handler for button presses
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":
            self.dismiss(self.__trackDescriptor)

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -1,345 +0,0 @@
from typing import Self

from .iso_language import IsoLanguage
from .track_type import TrackType
from .audio_layout import AudioLayout
from .track_disposition import TrackDisposition
from .track_codec import TrackCodec
from .logging_utils import get_ffx_logger

# from .helper import dictDiff, setDiff


class TrackDescriptor:

    CONTEXT_KEY = "context"

    ID_KEY = "id"
    INDEX_KEY = "index"
    SOURCE_INDEX_KEY = "source_index"
    SUB_INDEX_KEY = "sub_index"
    PATTERN_ID_KEY = "pattern_id"
    EXTERNAL_SOURCE_FILE_PATH_KEY = "external_source_file"

    DISPOSITION_SET_KEY = "disposition_set"
    TAGS_KEY = "tags"

    TRACK_TYPE_KEY = "track_type"
    CODEC_KEY = "codec_name"
    AUDIO_LAYOUT_KEY = "audio_layout"

    FFPROBE_INDEX_KEY = "index"
    FFPROBE_DISPOSITION_KEY = "disposition"
    FFPROBE_TAGS_KEY = "tags"
    FFPROBE_CODEC_TYPE_KEY = "codec_type"
    FFPROBE_CODEC_KEY = "codec_name"


    def __init__(self, **kwargs):

        if TrackDescriptor.CONTEXT_KEY in kwargs:
            if type(kwargs[TrackDescriptor.CONTEXT_KEY]) is not dict:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CONTEXT_KEY} is required to be of type dict"
                )
            self.__context = kwargs[TrackDescriptor.CONTEXT_KEY]
            self.__logger = self.__context['logger']
        else:
            self.__context = {}
            self.__logger = get_ffx_logger()

        if TrackDescriptor.ID_KEY in kwargs:
            if type(kwargs[TrackDescriptor.ID_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.ID_KEY} is required to be of type int"
                )
            self.__trackId = kwargs[TrackDescriptor.ID_KEY]
        else:
            self.__trackId = -1

        if TrackDescriptor.PATTERN_ID_KEY in kwargs:
            if type(kwargs[TrackDescriptor.PATTERN_ID_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.PATTERN_ID_KEY} is required to be of type int"
                )
            self.__patternId = kwargs[TrackDescriptor.PATTERN_ID_KEY]
        else:
            self.__patternId = -1

        if TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY in kwargs:
            if type(kwargs[TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY]) is not str:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY} is required to be of type str"
                )
            self.__externalSourceFilePath = kwargs[TrackDescriptor.EXTERNAL_SOURCE_FILE_PATH_KEY]
        else:
            self.__externalSourceFilePath = ''

        if TrackDescriptor.INDEX_KEY in kwargs:
            if type(kwargs[TrackDescriptor.INDEX_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.INDEX_KEY} is required to be of type int"
                )
            self.__index = kwargs[TrackDescriptor.INDEX_KEY]
        else:
            self.__index = -1

        if (
            TrackDescriptor.SOURCE_INDEX_KEY in kwargs
            and type(kwargs[TrackDescriptor.SOURCE_INDEX_KEY]) is int
        ):
            self.__sourceIndex = kwargs[TrackDescriptor.SOURCE_INDEX_KEY]
        else:
            self.__sourceIndex = self.__index

        if TrackDescriptor.SUB_INDEX_KEY in kwargs:
            if type(kwargs[TrackDescriptor.SUB_INDEX_KEY]) is not int:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.SUB_INDEX_KEY} is required to be of type int"
                )
            self.__subIndex = kwargs[TrackDescriptor.SUB_INDEX_KEY]
        else:
            self.__subIndex = -1

        if TrackDescriptor.TRACK_TYPE_KEY in kwargs:
            if type(kwargs[TrackDescriptor.TRACK_TYPE_KEY]) is not TrackType:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.TRACK_TYPE_KEY} is required to be of type TrackType"
                )
            self.__trackType = kwargs[TrackDescriptor.TRACK_TYPE_KEY]
        else:
            self.__trackType = TrackType.UNKNOWN

        if TrackDescriptor.CODEC_KEY in kwargs:
            if type(kwargs[TrackDescriptor.CODEC_KEY]) is not TrackCodec:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.CODEC_KEY} is required to be of type TrackCodec"
                )
            self.__trackCodec = kwargs[TrackDescriptor.CODEC_KEY]
        else:
            self.__trackCodec = TrackCodec.UNKNOWN

        if TrackDescriptor.TAGS_KEY in kwargs:
            if type(kwargs[TrackDescriptor.TAGS_KEY]) is not dict:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.TAGS_KEY} is required to be of type dict"
                )
            self.__trackTags = kwargs[TrackDescriptor.TAGS_KEY]
        else:
            self.__trackTags = {}

        if TrackDescriptor.DISPOSITION_SET_KEY in kwargs:
            if type(kwargs[TrackDescriptor.DISPOSITION_SET_KEY]) is not set:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.DISPOSITION_SET_KEY} is required to be of type set"
                )
            for d in kwargs[TrackDescriptor.DISPOSITION_SET_KEY]:
                if type(d) is not TrackDisposition:
                    raise TypeError(
                        f"TrackDescriptor.__init__(): All elements of argument set {TrackDescriptor.DISPOSITION_SET_KEY} are required to be of type TrackDisposition"
                    )
            self.__dispositionSet = kwargs[TrackDescriptor.DISPOSITION_SET_KEY]
        else:
            self.__dispositionSet = set()

        if TrackDescriptor.AUDIO_LAYOUT_KEY in kwargs:
            if type(kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY]) is not AudioLayout:
                raise TypeError(
                    f"TrackDescriptor.__init__(): Argument {TrackDescriptor.AUDIO_LAYOUT_KEY} is required to be of type AudioLayout"
                )
            self.__audioLayout = kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY]
        else:
            self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED

    @classmethod
    def fromFfprobe(cls, streamObj, subIndex: int = -1):
        """Processes a single ffprobe stream object, structured according to the following example:
        {
            "index": 4,
            "codec_name": "hdmv_pgs_subtitle",
            "codec_long_name": "HDMV Presentation Graphic Stream subtitles",
            "codec_type": "subtitle",
            "codec_tag_string": "[0][0][0][0]",
            "codec_tag": "0x0000",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/1000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1421035,
            "duration": "1421.035000",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0,
                "non_diegetic": 0,
                "captions": 0,
                "descriptions": 0,
                "metadata": 0,
                "dependent": 0,
                "still_image": 0
            },
            "tags": {
                "language": "ger",
                "title": "German Full"
            }
        }
        """

        trackType = (
            TrackType.fromLabel(streamObj["codec_type"])
            if "codec_type" in streamObj
            else TrackType.UNKNOWN
        )

        if trackType != TrackType.UNKNOWN:

            kwargs = {}

            kwargs[TrackDescriptor.INDEX_KEY] = (
                int(streamObj[TrackDescriptor.FFPROBE_INDEX_KEY])
                if TrackDescriptor.FFPROBE_INDEX_KEY in streamObj
                else -1
            )
            kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = kwargs[TrackDescriptor.INDEX_KEY]
            kwargs[TrackDescriptor.SUB_INDEX_KEY] = subIndex

            kwargs[TrackDescriptor.TRACK_TYPE_KEY] = trackType

            kwargs[TrackDescriptor.CODEC_KEY] = TrackCodec.identify(streamObj[TrackDescriptor.FFPROBE_CODEC_KEY])

            kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = (
                {
                    t
                    for d in (
                        k
                        for (k, v) in streamObj[
                            TrackDescriptor.FFPROBE_DISPOSITION_KEY
                        ].items()
                        if v
                    )
                    if (t := TrackDisposition.find(d)) is not None
                }
                if TrackDescriptor.FFPROBE_DISPOSITION_KEY in streamObj
                else set()
            )
            kwargs[TrackDescriptor.TAGS_KEY] = (
                streamObj[TrackDescriptor.FFPROBE_TAGS_KEY]
                if TrackDescriptor.FFPROBE_TAGS_KEY in streamObj
                else {}
            )
            kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = (
                AudioLayout.identify(streamObj)
                if trackType == TrackType.AUDIO
                else AudioLayout.LAYOUT_UNDEFINED
            )

            return cls(**kwargs)
        else:
            return None
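The disposition comprehension in `fromFfprobe()` first filters the ffprobe `disposition` dict down to the flags reported as 1, then maps each name to an enum member. The filtering half can be sketched standalone (hypothetical `active_dispositions` helper; the enum mapping step is omitted since it needs `TrackDisposition`):

```python
def active_dispositions(stream: dict) -> set:
    """Collect the names of disposition flags that ffprobe reports as 1."""
    return {name for name, flag in stream.get('disposition', {}).items() if flag}

stream = {
    'index': 4,
    'codec_type': 'subtitle',
    'disposition': {'default': 1, 'forced': 0, 'hearing_impaired': 1},
}
print(sorted(active_dispositions(stream)))  # ['default', 'hearing_impaired']
```

A missing `disposition` key yields an empty set, matching the `else set()` branch above; unknown flag names would then be dropped by the `TrackDisposition.find(d) is not None` filter in the real code.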
    def getId(self):
        return self.__trackId

    def getPatternId(self):
        return self.__patternId

    def getIndex(self):
        return self.__index

    def setIndex(self, index):
        self.__index = index

    def getSourceIndex(self):
        return self.__sourceIndex

    def setSourceIndex(self, sourceIndex: int):
        self.__sourceIndex = int(sourceIndex)

    def getSubIndex(self):
        return self.__subIndex

    def setSubIndex(self, subIndex):
        self.__subIndex = subIndex

    def getType(self):
        return self.__trackType

    def getCodec(self) -> TrackCodec:
        return self.__trackCodec

    def getLanguage(self):
        if "language" in self.__trackTags:
            return IsoLanguage.findThreeLetter(self.__trackTags["language"])
        else:
            return IsoLanguage.UNDEFINED

    def setLanguage(self, language: IsoLanguage):
        if type(language) is not IsoLanguage:
            raise TypeError('language has to be of type IsoLanguage')
        self.__trackTags["language"] = language

    def getTitle(self):
        if "title" in self.__trackTags:
            return str(self.__trackTags["title"])
        else:
            return ""

    def setTitle(self, title: str):
        self.__trackTags["title"] = str(title)


    def getAudioLayout(self):
        return self.__audioLayout

    def getTags(self):
        return self.__trackTags

    def getDispositionSet(self):
        return self.__dispositionSet

    def setDispositionSet(self, dispositionSet: set):
        self.__dispositionSet = dispositionSet

    def getDispositionFlag(self, disposition: TrackDisposition) -> bool:
        return disposition in self.__dispositionSet

    def setDispositionFlag(self, disposition: TrackDisposition, state: bool):
        if state:
            self.__dispositionSet.add(disposition)
        else:
            self.__dispositionSet.discard(disposition)

    # def compare(self, vsTrackDescriptor: Self):
    #
    #     compareResult = {}
    #
    #     tagsDiffResult = dictKeysDiff(vsTrackDescriptor.getTags(), self.getTags())
    #
    #     if tagsDiffResult:
    #         compareResult[TrackDescriptor.TAGS_KEY] = tagsDiffResult
    #
    #     vsDispositions = vsTrackDescriptor.getDispositionSet()
    #     dispositions = self.getDispositionSet()
    #
    #     dispositionDiffResult = setDiff(vsDispositions, dispositions)
    #
    #     if dispositionDiffResult:
    #         compareResult[TrackDescriptor.DISPOSITION_SET_KEY] = dispositionDiffResult
    #
    #     return compareResult

    def setExternalSourceFilePath(self, filePath: str):
        self.__externalSourceFilePath = str(filePath)

    def getExternalSourceFilePath(self):
        return self.__externalSourceFilePath
@@ -1,468 +0,0 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, SelectionList, Select, DataTable, Input
from textual.containers import Grid
from textual.widgets._data_table import CellDoesNotExist

from .audio_layout import AudioLayout
from .iso_language import IsoLanguage
from .tag_delete_screen import TagDeleteScreen
from .tag_details_screen import TagDetailsScreen
from .track_codec import TrackCodec
from .track_descriptor import TrackDescriptor
from .track_disposition import TrackDisposition
from .track_type import TrackType

from ffx.helper import formatRichColor, removeRichColor


class TrackDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 5 24;
        grid-rows: 2 2 2 2 2 3 3 2 2 3 2 2 2 2 2 6 2 2 6 2 2 2;
        grid-columns: 25 25 25 25 125;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    SelectionList {
        border: none;
        min-height: 6;
    }
    Select {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }
    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .yellow {
        tint: yellow 40%;
    }
    """

    def __init__(
        self,
        trackDescriptor: TrackDescriptor = None,
        patternId=None,
        patternLabel: str = "",
        siblingTrackDescriptors=None,
        trackType: TrackType = None,
        index=None,
        subIndex=None,
    ):
        super().__init__()

        self.context = self.app.getContext()

        self.__configurationData = self.context["config"].getData()

        metadataConfiguration = self.__configurationData.get("metadata", {})

        self.__removeTrackKeys = metadataConfiguration.get("streams", {}).get("remove", [])
        self.__ignoreTrackKeys = metadataConfiguration.get("streams", {}).get("ignore", [])

        self.__isNew = trackDescriptor is None
        self.__trackDescriptor = trackDescriptor
        self.__patternId = (
            int(patternId)
            if patternId is not None
            else (
                int(trackDescriptor.getPatternId())
                if trackDescriptor is not None and trackDescriptor.getPatternId() != -1
                else -1
            )
        )
        self.__patternLabel = str(patternLabel)
        self.__siblingTrackDescriptors = list(siblingTrackDescriptors or [])

        if self.__isNew:
            self.__trackType = trackType
            self.__trackCodec = TrackCodec.UNKNOWN
            self.__audioLayout = AudioLayout.LAYOUT_UNDEFINED
            self.__index = index
            self.__subIndex = subIndex
            self.__draftTrackTags = {}
        else:
            self.__trackType = trackDescriptor.getType()
            self.__trackCodec = trackDescriptor.getCodec()
            self.__audioLayout = trackDescriptor.getAudioLayout()
            self.__index = trackDescriptor.getIndex()
            self.__subIndex = trackDescriptor.getSubIndex()
            self.__draftTrackTags = {
                key: value
                for key, value in trackDescriptor.getTags().items()
                if key not in ("language", "title")
            }

    def _descriptor_refs_same_track(self, descriptor: TrackDescriptor) -> bool:
        if self.__trackDescriptor is None:
            return False
        if descriptor.getId() != -1 and self.__trackDescriptor.getId() != -1:
            return descriptor.getId() == self.__trackDescriptor.getId()
        return (
            descriptor.getPatternId() == self.__trackDescriptor.getPatternId()
            and descriptor.getIndex() == self.__trackDescriptor.getIndex()
            and descriptor.getSubIndex() == self.__trackDescriptor.getSubIndex()
        )

    def updateTags(self):

        self.trackTagsTable.clear()

        for key, value in self.__draftTrackTags.items():
            textColor = None
            if key in self.__ignoreTrackKeys:
                textColor = "blue"
            if key in self.__removeTrackKeys:
                textColor = "red"

            row = (formatRichColor(key, textColor), formatRichColor(value, textColor))
            self.trackTagsTable.add_row(*map(str, row))

    def on_mount(self):

        self.query_one("#index_label", Static).update(
            str(self.__index) if self.__index is not None else "-"
        )
        self.query_one("#subindex_label", Static).update(
            str(self.__subIndex) if self.__subIndex is not None else "-"
        )
        self.query_one("#pattern_label", Static).update(self.__patternLabel)

        if self.__trackType is not None:
            self.query_one("#type_select", Select).value = self.__trackType.label()

        self.query_one("#audio_layout_select", Select).value = self.__audioLayout.label()
|
|
||||||
|
|
||||||
for disposition in TrackDisposition:
|
|
||||||
|
|
||||||
dispositionIsSet = (
|
|
||||||
self.__trackDescriptor is not None
|
|
||||||
and disposition in self.__trackDescriptor.getDispositionSet()
|
|
||||||
)
|
|
||||||
|
|
||||||
dispositionOption = (
|
|
||||||
disposition.label(),
|
|
||||||
disposition.index(),
|
|
||||||
dispositionIsSet,
|
|
||||||
)
|
|
||||||
self.query_one("#dispositions_selection_list", SelectionList).add_option(
|
|
||||||
dispositionOption
|
|
||||||
)
|
|
||||||
|
|
||||||
if self.__trackDescriptor is not None:
|
|
||||||
self.query_one("#language_select", Select).value = (
|
|
||||||
self.__trackDescriptor.getLanguage().label()
|
|
||||||
)
|
|
||||||
self.query_one("#title_input", Input).value = self.__trackDescriptor.getTitle()
|
|
||||||
self.updateTags()
|
|
||||||
|
|
||||||
def compose(self):
|
|
||||||
|
|
||||||
self.trackTagsTable = DataTable(classes="five")
|
|
||||||
|
|
||||||
self.column_key_track_tag_key = self.trackTagsTable.add_column("Key", width=50)
|
|
||||||
self.column_key_track_tag_value = self.trackTagsTable.add_column("Value", width=100)
|
|
||||||
|
|
||||||
self.trackTagsTable.cursor_type = "row"
|
|
||||||
|
|
||||||
languages = [language.label() for language in IsoLanguage]
|
|
||||||
|
|
||||||
yield Header()
|
|
||||||
|
|
||||||
with Grid():
|
|
||||||
|
|
||||||
yield Static(
|
|
||||||
"New stream" if self.__isNew else "Edit stream",
|
|
||||||
id="toplabel",
|
|
||||||
classes="five",
|
|
||||||
)
|
|
||||||
|
|
||||||
yield Static("for pattern")
|
|
||||||
yield Static("", id="pattern_label", classes="four", markup=False)
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static("Index / Subindex")
|
|
||||||
yield Static("", id="index_label", classes="two")
|
|
||||||
yield Static("", id="subindex_label", classes="two")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static("Type")
|
|
||||||
yield Select.from_values(
|
|
||||||
[trackType.label() for trackType in TrackType],
|
|
||||||
classes="four",
|
|
||||||
id="type_select",
|
|
||||||
)
|
|
||||||
|
|
||||||
yield Static("Audio Layout")
|
|
||||||
yield Select.from_values(
|
|
||||||
[layout.label() for layout in AudioLayout],
|
|
||||||
classes="four",
|
|
||||||
id="audio_layout_select",
|
|
||||||
)
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static("Language")
|
|
||||||
yield Select.from_values(languages, classes="four", id="language_select")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static("Title")
|
|
||||||
yield Input(id="title_input", classes="four")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static("Stream tags")
|
|
||||||
yield Static(" ")
|
|
||||||
yield Button("Add", id="button_add_stream_tag")
|
|
||||||
yield Button("Edit", id="button_edit_stream_tag")
|
|
||||||
yield Button("Delete", id="button_delete_stream_tag")
|
|
||||||
|
|
||||||
yield self.trackTagsTable
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static("Stream dispositions", classes="five")
|
|
||||||
|
|
||||||
yield SelectionList[int](
|
|
||||||
classes="five",
|
|
||||||
id="dispositions_selection_list",
|
|
||||||
)
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Button("Save", id="save_button")
|
|
||||||
yield Button("Cancel", id="cancel_button")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five")
|
|
||||||
|
|
||||||
yield Static(" ", classes="five", id="messagestatic")
|
|
||||||
|
|
||||||
yield Footer(id="footer")
|
|
||||||
|
|
||||||
def getTrackDescriptorFromInput(self):
|
|
||||||
|
|
||||||
kwargs = {}
|
|
||||||
kwargs[TrackDescriptor.CONTEXT_KEY] = self.context
|
|
||||||
|
|
||||||
if self.__trackDescriptor is not None and self.__trackDescriptor.getId() != -1:
|
|
||||||
kwargs[TrackDescriptor.ID_KEY] = self.__trackDescriptor.getId()
|
|
||||||
|
|
||||||
if self.__patternId != -1:
|
|
||||||
kwargs[TrackDescriptor.PATTERN_ID_KEY] = int(self.__patternId)
|
|
||||||
|
|
||||||
kwargs[TrackDescriptor.INDEX_KEY] = int(self.__index)
|
|
||||||
kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = (
|
|
||||||
int(self.__trackDescriptor.getSourceIndex())
|
|
||||||
if self.__trackDescriptor is not None
|
|
||||||
else int(self.__index)
|
|
||||||
)
|
|
||||||
if self.__subIndex is not None and int(self.__subIndex) >= 0:
|
|
||||||
kwargs[TrackDescriptor.SUB_INDEX_KEY] = int(self.__subIndex)
|
|
||||||
|
|
||||||
selectedTrackType = TrackType.fromLabel(
|
|
||||||
self.query_one("#type_select", Select).value
|
|
||||||
)
|
|
||||||
kwargs[TrackDescriptor.TRACK_TYPE_KEY] = selectedTrackType
|
|
||||||
kwargs[TrackDescriptor.CODEC_KEY] = self.__trackCodec
|
|
||||||
|
|
||||||
if selectedTrackType == TrackType.AUDIO:
|
|
||||||
kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.fromLabel(
|
|
||||||
self.query_one("#audio_layout_select", Select).value
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
kwargs[TrackDescriptor.AUDIO_LAYOUT_KEY] = AudioLayout.LAYOUT_UNDEFINED
|
|
||||||
|
|
||||||
trackTags = dict(self.__draftTrackTags)
|
|
||||||
|
|
||||||
language = self.query_one("#language_select", Select).value
|
|
||||||
if language:
|
|
||||||
trackTags["language"] = IsoLanguage.find(language).threeLetter()
|
|
||||||
|
|
||||||
title = self.query_one("#title_input", Input).value
|
|
||||||
if title:
|
|
||||||
trackTags["title"] = title
|
|
||||||
|
|
||||||
kwargs[TrackDescriptor.TAGS_KEY] = trackTags
|
|
||||||
|
|
||||||
dispositionFlags = sum(
|
|
||||||
[2 ** flag for flag in self.query_one("#dispositions_selection_list", SelectionList).selected]
|
|
||||||
)
|
|
||||||
kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = TrackDisposition.toSet(
|
|
||||||
dispositionFlags
|
|
||||||
)
|
|
||||||
|
|
||||||
return TrackDescriptor(**kwargs)
|
|
||||||
|
|
||||||
def getSelectedTag(self):
|
|
||||||
|
|
||||||
try:
|
|
||||||
row_key, _ = self.trackTagsTable.coordinate_to_cell_key(
|
|
||||||
self.trackTagsTable.cursor_coordinate
|
|
||||||
)
|
|
||||||
|
|
||||||
if row_key is not None:
|
|
||||||
selected_tag_data = self.trackTagsTable.get_row(row_key)
|
|
||||||
|
|
||||||
tagKey = removeRichColor(selected_tag_data[0])
|
|
||||||
tagValue = removeRichColor(selected_tag_data[1])
|
|
||||||
|
|
||||||
return tagKey, tagValue
|
|
||||||
|
|
||||||
return None
|
|
||||||
|
|
||||||
except CellDoesNotExist:
|
|
||||||
return None
|
|
||||||
|
|
||||||
def on_button_pressed(self, event: Button.Pressed) -> None:
|
|
||||||
|
|
||||||
if event.button.id == "save_button":
|
|
||||||
trackDescriptor = self.getTrackDescriptorFromInput()
|
|
||||||
|
|
||||||
siblingTrackList = [
|
|
||||||
descriptor
|
|
||||||
for descriptor in self.__siblingTrackDescriptors
|
|
||||||
if not self._descriptor_refs_same_track(descriptor)
|
|
||||||
]
|
|
||||||
siblingTrackList = [
|
|
||||||
descriptor
|
|
||||||
for descriptor in siblingTrackList
|
|
||||||
if descriptor.getType() == trackDescriptor.getType()
|
|
||||||
]
|
|
||||||
|
|
||||||
numDefaultTracks = len(
|
|
||||||
[
|
|
||||||
descriptor
|
|
||||||
for descriptor in siblingTrackList
|
|
||||||
if TrackDisposition.DEFAULT in descriptor.getDispositionSet()
|
|
||||||
]
|
|
||||||
)
|
|
||||||
numForcedTracks = len(
|
|
||||||
[
|
|
||||||
descriptor
|
|
||||||
for descriptor in siblingTrackList
|
|
||||||
if TrackDisposition.FORCED in descriptor.getDispositionSet()
|
|
||||||
]
|
|
||||||
)
|
|
||||||
|
|
||||||
if self.__isNew:
|
|
||||||
trackDescriptor.setSubIndex(len(siblingTrackList))
|
|
||||||
elif self.__subIndex is not None and int(self.__subIndex) >= 0:
|
|
||||||
trackDescriptor.setSubIndex(int(self.__subIndex))
|
|
||||||
|
|
||||||
if (
|
|
||||||
TrackDisposition.DEFAULT in trackDescriptor.getDispositionSet()
|
|
||||||
and numDefaultTracks
|
|
||||||
) or (
|
|
||||||
TrackDisposition.FORCED in trackDescriptor.getDispositionSet()
|
|
||||||
and numForcedTracks
|
|
||||||
):
|
|
||||||
|
|
||||||
self.query_one("#messagestatic", Static).update(
|
|
||||||
"Cannot add another stream with disposition flag 'default' or 'forced' set"
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
self.query_one("#messagestatic", Static).update(" ")
|
|
||||||
self.dismiss(trackDescriptor)
|
|
||||||
|
|
||||||
if event.button.id == "cancel_button":
|
|
||||||
self.app.pop_screen()
|
|
||||||
|
|
||||||
if event.button.id == "button_add_stream_tag":
|
|
||||||
self.app.push_screen(TagDetailsScreen(), self.handle_update_tag)
|
|
||||||
|
|
||||||
if event.button.id == "button_edit_stream_tag":
|
|
||||||
selectedTag = self.getSelectedTag()
|
|
||||||
if selectedTag is not None:
|
|
||||||
self.app.push_screen(
|
|
||||||
TagDetailsScreen(key=selectedTag[0], value=selectedTag[1]),
|
|
||||||
self.handle_update_tag,
|
|
||||||
)
|
|
||||||
|
|
||||||
if event.button.id == "button_delete_stream_tag":
|
|
||||||
selectedTag = self.getSelectedTag()
|
|
||||||
if selectedTag is not None:
|
|
||||||
self.app.push_screen(
|
|
||||||
TagDeleteScreen(key=selectedTag[0], value=selectedTag[1]),
|
|
||||||
self.handle_delete_tag,
|
|
||||||
)
|
|
||||||
|
|
||||||
def handle_update_tag(self, tag):
|
|
||||||
if tag is None:
|
|
||||||
return
|
|
||||||
self.__draftTrackTags[str(tag[0])] = str(tag[1])
|
|
||||||
self.updateTags()
|
|
||||||
|
|
||||||
def handle_delete_tag(self, trackTag):
|
|
||||||
if trackTag is None:
|
|
||||||
return
|
|
||||||
self.__draftTrackTags.pop(str(trackTag[0]), None)
|
|
||||||
self.updateTags()
|
|
||||||
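The save handler above rejects a second track carrying the 'default' or 'forced' disposition among siblings of the same type. The uniqueness rule can be sketched in isolation; the function and set-of-strings representation below are hypothetical simplifications, not the real TrackDescriptor API:

```python
# Hypothetical minimal model of the sibling default/forced uniqueness check.
def can_save(new_dispositions: set, sibling_disposition_sets: list) -> bool:
    for flag in ("default", "forced"):
        if flag in new_dispositions and any(
            flag in siblings for siblings in sibling_disposition_sets
        ):
            # A sibling of the same type already carries this flag.
            return False
    return True

assert can_save({"default"}, [{"forced"}]) is True
assert can_save({"default"}, [{"default"}]) is False
assert can_save(set(), [{"default"}, {"forced"}]) is True
```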
@@ -1,76 +0,0 @@
import click

from enum import Enum


class TrackDisposition(Enum):

    DEFAULT = {"name": "default", "index": 0, "indicator": "DEF"}
    FORCED = {"name": "forced", "index": 1, "indicator": "FOR"}

    DUB = {"name": "dub", "index": 2, "indicator": "DUB"}
    ORIGINAL = {"name": "original", "index": 3, "indicator": "ORG"}
    COMMENT = {"name": "comment", "index": 4, "indicator": "COM"}
    LYRICS = {"name": "lyrics", "index": 5, "indicator": "LYR"}
    KARAOKE = {"name": "karaoke", "index": 6, "indicator": "KAR"}
    HEARING_IMPAIRED = {"name": "hearing_impaired", "index": 7, "indicator": "HIM"}
    VISUAL_IMPAIRED = {"name": "visual_impaired", "index": 8, "indicator": "VIM"}
    CLEAN_EFFECTS = {"name": "clean_effects", "index": 9, "indicator": "CLE"}
    ATTACHED_PIC = {"name": "attached_pic", "index": 10, "indicator": "ATP"}
    TIMED_THUMBNAILS = {"name": "timed_thumbnails", "index": 11, "indicator": "TTH"}
    NON_DIEGETICS = {"name": "non_diegetic", "index": 12, "indicator": "NOD"}
    CAPTIONS = {"name": "captions", "index": 13, "indicator": "CAP"}
    DESCRIPTIONS = {"name": "descriptions", "index": 14, "indicator": "DES"}
    METADATA = {"name": "metadata", "index": 15, "indicator": "MED"}
    DEPENDENT = {"name": "dependent", "index": 16, "indicator": "DEP"}
    STILL_IMAGE = {"name": "still_image", "index": 17, "indicator": "STI"}


    def label(self):
        return str(self.value['name'])

    def index(self):
        return int(self.value['index'])

    def indicator(self):
        return str(self.value['indicator'])


    @staticmethod
    def toFlags(dispositionSet):
        """Flags stored in integer bits (2**index)"""

        if type(dispositionSet) is not set:
            raise click.ClickException('TrackDisposition.toFlags(): Argument is not of type set')

        flags = 0
        for d in dispositionSet:
            if type(d) is not TrackDisposition:
                raise click.ClickException('TrackDisposition.toFlags(): Element not of type TrackDisposition')
            flags += 2 ** d.index()
        return flags

    @staticmethod
    def toSet(flags):
        dispositionSet = set()
        for d in TrackDisposition:
            if flags & int(2 ** d.index()):
                dispositionSet.add(d)
        return dispositionSet


    @staticmethod
    def find(label):
        matchingDispositions = [d for d in TrackDisposition if d.label() == str(label)]
        if matchingDispositions:
            return matchingDispositions[0]
        else:
            return None

    @staticmethod
    def fromIndicator(indicator: str):
        matchingDispositions = [d for d in TrackDisposition if d.indicator() == str(indicator)]
        if matchingDispositions:
            return matchingDispositions[0]
        else:
            return None
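The `toFlags()`/`toSet()` pair above stores a disposition set as the bits of a single integer: bit `i` is set when the disposition whose `index()` is `i` is present. A minimal stand-alone sketch of that round trip, using a reduced hypothetical enum rather than the full TrackDisposition list:

```python
from enum import Enum

# Reduced, hypothetical enum standing in for TrackDisposition;
# the member value is the bit index, mirroring TrackDisposition.index().
class Disposition(Enum):
    DEFAULT = 0
    FORCED = 1
    COMMENT = 4

def to_flags(dispositions: set) -> int:
    # Set bit d.value for each member present (equivalent to summing 2**index).
    flags = 0
    for d in dispositions:
        flags |= 1 << d.value
    return flags

def to_set(flags: int) -> set:
    # Recover the members whose bit is set.
    return {d for d in Disposition if flags & (1 << d.value)}

selected = {Disposition.DEFAULT, Disposition.COMMENT}
flags = to_flags(selected)
assert flags == 0b10001  # bits 0 and 4
assert to_set(flags) == selected
```

The round trip is lossless because each disposition owns a distinct bit, which is why the save handler can pass the summed selection-list values straight to `toSet()`.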
@@ -1,39 +0,0 @@
from enum import Enum

class TrackType(Enum):

    VIDEO = {'label': 'video', 'index': 1}
    AUDIO = {'label': 'audio', 'index': 2}
    SUBTITLE = {'label': 'subtitle', 'index': 3}
    ATTACHMENT = {'label': 'attachment', 'index': 4}

    UNKNOWN = {'label': 'unknown', 'index': 0}


    def label(self):
        """Returns the stream type as string"""
        return str(self.value['label'])

    def indicator(self):
        """Returns the stream type as single letter"""
        # Note: 'audio' and 'attachment' both yield 'a', so this value is ambiguous.
        return self.label()[0]

    def index(self):
        """Returns the stream type index"""
        return int(self.value['index'])

    @staticmethod
    def fromLabel(label: str):
        tlist = [t for t in TrackType if t.value['label'] == str(label)]
        if tlist:
            return tlist[0]
        else:
            return TrackType.UNKNOWN

    @staticmethod
    def fromIndex(index: int):
        tlist = [t for t in TrackType if t.value['index'] == int(index)]
        if tlist:
            return tlist[0]
        else:
            return TrackType.UNKNOWN
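`TrackType.fromLabel()` above (and the matching `fromIndex`/`fromLabel` pair in VideoEncoder) follows one pattern: a linear reverse lookup over dict-valued enum members with an explicit fallback member. A stand-alone sketch of the pattern, using a reduced hypothetical enum:

```python
from enum import Enum

# Hypothetical reduced enum using the same dict-valued member convention.
class Kind(Enum):
    VIDEO = {'label': 'video', 'index': 1}
    AUDIO = {'label': 'audio', 'index': 2}
    UNKNOWN = {'label': 'unknown', 'index': 0}

    @staticmethod
    def from_label(label: str) -> "Kind":
        # Linear scan over members; UNKNOWN is the explicit fallback,
        # so callers never have to handle None.
        for k in Kind:
            if k.value['label'] == str(label):
                return k
        return Kind.UNKNOWN

assert Kind.from_label('audio') is Kind.AUDIO
assert Kind.from_label('bogus') is Kind.UNKNOWN
```

Returning a sentinel member instead of `None` keeps call sites like the TUI's `type_select` handling simple, at the cost of callers having to treat UNKNOWN explicitly when it matters.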
@@ -1,34 +0,0 @@
from enum import Enum

class VideoEncoder(Enum):

    AV1 = {'label': 'av1', 'index': 1}
    VP9 = {'label': 'vp9', 'index': 2}
    H264 = {'label': 'h264', 'index': 3}
    COPY = {'label': 'copy', 'index': 4}

    UNDEFINED = {'label': 'undefined', 'index': 0}

    def label(self):
        """Returns the stream type as string"""
        return str(self.value['label'])

    def index(self):
        """Returns the stream type index"""
        return int(self.value['index'])

    @staticmethod
    def fromLabel(label: str):
        tlist = [t for t in VideoEncoder if t.value['label'] == str(label)]
        if tlist:
            return tlist[0]
        else:
            return VideoEncoder.UNDEFINED

    @staticmethod
    def fromIndex(index: int):
        tlist = [t for t in VideoEncoder if t.value['index'] == int(index)]
        if tlist:
            return tlist[0]
        else:
            return VideoEncoder.UNDEFINED
@@ -1 +0,0 @@
# Repo-root tests package for legacy and future test code.
@@ -1 +0,0 @@
@@ -1 +0,0 @@
@@ -1,138 +0,0 @@
from __future__ import annotations

from pathlib import Path
import tempfile
import unittest

from tests.support.ffx_bundle import (
    PatternTrackSpec,
    SourceTrackSpec,
    add_show,
    build_controller_context,
    create_source_fixture,
    dispose_controller_context,
    expected_output_path,
    run_ffx_convert,
)

from ffx.pattern_controller import PatternController
from ffx.track_type import TrackType

try:
    import pytest
except ImportError:  # pragma: no cover - unittest-only environments
    pytest = None

if pytest is not None:
    pytestmark = [pytest.mark.integration, pytest.mark.pattern_management]


class PatternManagementCliTests(unittest.TestCase):
    def setUp(self):
        self.tempdir = tempfile.TemporaryDirectory()
        self.workdir = Path(self.tempdir.name)
        self.home_dir = self.workdir / "home"
        self.home_dir.mkdir()
        self.database_path = self.workdir / "test.db"

    def tearDown(self):
        self.tempdir.cleanup()

    def prepare_duplicate_matching_patterns(self):
        context = build_controller_context(self.database_path)
        try:
            add_show(context, show_id=1)
            add_show(context, show_id=2)

            controller = PatternController(context)
            track_descriptors = [
                PatternTrackSpec(index=0, source_index=0, track_type=TrackType.VIDEO)
            ]

            def to_track_descriptor(spec: PatternTrackSpec):
                from ffx.track_descriptor import TrackDescriptor

                kwargs = {
                    TrackDescriptor.INDEX_KEY: spec.index,
                    TrackDescriptor.SOURCE_INDEX_KEY: spec.source_index,
                    TrackDescriptor.TRACK_TYPE_KEY: spec.track_type,
                    TrackDescriptor.TAGS_KEY: dict(spec.tags),
                    TrackDescriptor.DISPOSITION_SET_KEY: set(spec.dispositions),
                }
                return TrackDescriptor(**kwargs)

            controller.savePatternSchema(
                {"show_id": 1, "pattern": r"^dup_(s[0-9]+e[0-9]+)\.mkv$"},
                [to_track_descriptor(track_descriptors[0])],
            )
            controller.savePatternSchema(
                {"show_id": 2, "pattern": r"^dup_.*$"},
                [to_track_descriptor(track_descriptors[0])],
            )
        finally:
            dispose_controller_context(context)

    def test_convert_fails_when_filename_matches_more_than_one_pattern(self):
        self.prepare_duplicate_matching_patterns()
        source_filename = "dup_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng"),
            ],
        )

        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            str(source_path),
        )

        self.assertNotEqual(completed.returncode, 0)
        error_output = f"{completed.stdout}\n{completed.stderr}"
        self.assertIn("matched more than one pattern", error_output)
        self.assertFalse(expected_output_path(self.workdir, source_filename).exists())

    def test_convert_can_ignore_duplicate_matches_when_no_pattern_is_requested(self):
        self.prepare_duplicate_matching_patterns()
        source_filename = "dup_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng"),
            ],
        )

        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-pattern",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            str(source_path),
        )

        self.assertEqual(
            0,
            completed.returncode,
            f"STDOUT:\n{completed.stdout}\nSTDERR:\n{completed.stderr}",
        )
        self.assertTrue(expected_output_path(self.workdir, source_filename).exists())


if __name__ == "__main__":
    unittest.main()
@@ -1 +0,0 @@
@@ -1,436 +0,0 @@
from __future__ import annotations

import json
from pathlib import Path
import tempfile
import unittest

from tests.support.ffx_bundle import (
    PatternTrackSpec,
    SourceTrackSpec,
    create_source_fixture,
    expected_output_path,
    extract_first_subtitle_text,
    ffprobe_json,
    get_tag,
    prepare_pattern_database,
    run_ffx_convert,
    write_vtt,
)

from ffx.track_type import TrackType

try:
    import pytest
except ImportError:  # pragma: no cover - unittest-only environments
    pytest = None

if pytest is not None:
    pytestmark = [pytest.mark.integration, pytest.mark.subtrack_mapping]


class SubtrackMappingBundleTests(unittest.TestCase):
    def setUp(self):
        self.tempdir = tempfile.TemporaryDirectory()
        self.workdir = Path(self.tempdir.name)
        self.home_dir = self.workdir / "home"
        self.home_dir.mkdir()
        self.database_path = self.workdir / "test.db"

    def tearDown(self):
        self.tempdir.cleanup()

    def write_config(self, data: dict) -> None:
        config_dir = self.home_dir / ".local" / "etc"
        config_dir.mkdir(parents=True, exist_ok=True)
        (config_dir / "ffx.json").write_text(json.dumps(data), encoding="utf-8")

    def assertCompleted(self, completed):
        if completed.returncode != 0:
            self.fail(
                "FFX convert failed\n"
                f"STDOUT:\n{completed.stdout}\n"
                f"STDERR:\n{completed.stderr}"
            )

    def test_pattern_reorders_and_omits_tracks_preserving_metadata_and_group_order(self):
        source_filename = "reorder_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0", title="Video Zero"),
                SourceTrackSpec(
                    TrackType.SUBTITLE,
                    identity="subtitle-1",
                    language="eng",
                    title="First Subtitle",
                    subtitle_lines=("first embedded subtitle",),
                ),
                SourceTrackSpec(
                    TrackType.AUDIO,
                    identity="audio-2",
                    language="deu",
                    title="German Audio",
                ),
                SourceTrackSpec(
                    TrackType.SUBTITLE,
                    identity="subtitle-3",
                    language="fra",
                    title="Second Subtitle",
                    subtitle_lines=("second embedded subtitle",),
                ),
                SourceTrackSpec(TrackType.ATTACHMENT, attachment_name="ordered.ttf"),
            ],
        )

        prepare_pattern_database(
            self.database_path,
            r"^reorder_(s[0-9]+e[0-9]+)\.mkv$",
            [
                PatternTrackSpec(
                    index=0,
                    source_index=0,
                    track_type=TrackType.VIDEO,
                    tags={"THIS_IS": "video-0", "title": "Video Zero"},
                ),
                PatternTrackSpec(
                    index=1,
                    source_index=2,
                    track_type=TrackType.AUDIO,
                    tags={"THIS_IS": "audio-2", "language": "deu", "title": "German Audio"},
                ),
                PatternTrackSpec(
                    index=2,
                    source_index=1,
                    track_type=TrackType.SUBTITLE,
                    tags={"THIS_IS": "subtitle-1", "language": "eng", "title": "First Subtitle"},
                ),
            ],
        )

        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            str(source_path),
        )
        self.assertCompleted(completed)

        output_path = expected_output_path(self.workdir, source_filename)
        self.assertTrue(output_path.is_file(), output_path)

        streams = ffprobe_json(output_path)["streams"]
        self.assertEqual(
            [stream["codec_type"] for stream in streams],
            ["video", "audio", "subtitle", "attachment"],
        )
        self.assertEqual(
            [get_tag(streams[index], "THIS_IS") for index in range(3)],
            ["video-0", "audio-2", "subtitle-1"],
        )
        self.assertNotIn(
            "subtitle-3",
            [get_tag(stream, "THIS_IS") for stream in streams if stream["codec_type"] != "attachment"],
        )
        self.assertEqual(streams[-1]["codec_name"], "ttf")
        extracted_subtitle = extract_first_subtitle_text(self.workdir, output_path)
        self.assertIn("first embedded subtitle", extracted_subtitle)
        self.assertNotIn("second embedded subtitle", extracted_subtitle)

    def test_cli_rearrange_streams_reorders_tracks_without_database_pattern(self):
        source_filename = "cli_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng", title="First Audio"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-2", language="deu", title="Second Audio"),
                SourceTrackSpec(TrackType.SUBTITLE, identity="subtitle-3", language="eng", title="Subtitle"),
            ],
        )

        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-pattern",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            "--rearrange-streams",
            "0,2,1,3",
            str(source_path),
        )
        self.assertCompleted(completed)

        output_path = expected_output_path(self.workdir, source_filename)
        streams = ffprobe_json(output_path)["streams"]

        self.assertEqual(
            [stream["codec_type"] for stream in streams],
            ["video", "audio", "audio", "subtitle"],
        )
        self.assertEqual(
            [get_tag(stream, "THIS_IS") for stream in streams],
            ["video-0", "audio-2", "audio-1", "subtitle-3"],
        )

    def test_no_pattern_stream_remove_list_clears_copied_stream_metadata(self):
        source_filename = "remove_tags_s01e01.mkv"
        self.write_config(
            {
                "metadata": {
                    "streams": {
                        "remove": ["BPS"],
                    }
                }
            }
        )
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(
                    TrackType.VIDEO,
                    identity="video-0",
                    extra_tags={"BPS": "remove-me", "KEEP_ME": "video-keep"},
                ),
                SourceTrackSpec(
                    TrackType.AUDIO,
                    identity="audio-1",
                    language="eng",
                    title="Main Audio",
                    extra_tags={"BPS": "remove-me", "KEEP_ME": "audio-keep"},
                ),
            ],
        )

        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-pattern",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            str(source_path),
        )
        self.assertCompleted(completed)

        output_path = expected_output_path(self.workdir, source_filename)
        streams = ffprobe_json(output_path)["streams"]

        self.assertEqual(
            [stream["codec_type"] for stream in streams],
            ["video", "audio"],
        )
        self.assertEqual(get_tag(streams[0], "THIS_IS"), "video-0")
        self.assertEqual(get_tag(streams[0], "KEEP_ME"), "video-keep")
        self.assertIsNone(get_tag(streams[0], "BPS"))
        self.assertEqual(get_tag(streams[1], "THIS_IS"), "audio-1")
        self.assertEqual(get_tag(streams[1], "KEEP_ME"), "audio-keep")
        self.assertIsNone(get_tag(streams[1], "BPS"))

    def test_pattern_validation_fails_for_nonexistent_source_track_reference(self):
        source_filename = "invalid_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1"),
                SourceTrackSpec(TrackType.SUBTITLE, identity="subtitle-2"),
            ],
        )

        prepare_pattern_database(
            self.database_path,
            r"^invalid_(s[0-9]+e[0-9]+)\.mkv$",
            [
                PatternTrackSpec(index=0, source_index=0, track_type=TrackType.VIDEO),
                PatternTrackSpec(index=1, source_index=99, track_type=TrackType.SUBTITLE),
            ],
        )

        completed = run_ffx_convert(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--video-encoder",
            "copy",
            "--no-tmdb",
            "--no-prompt",
            "--no-signature",
            str(source_path),
        )

        self.assertNotEqual(completed.returncode, 0)
        error_output = f"{completed.stdout}\n{completed.stderr}"
        self.assertIn("non-existent source track #99", error_output)
        self.assertFalse(expected_output_path(self.workdir, source_filename).exists())

    def test_external_subtitle_file_replaces_payload_and_overrides_metadata(self):
        source_filename = "substitute_s01e01.mkv"
        self.write_config(
            {
                "metadata": {
                    "streams": {
                        "remove": ["BPS"],
                    }
                }
            }
        )
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng", title="Main Audio"),
                SourceTrackSpec(
                    TrackType.SUBTITLE,
                    identity="embedded-subtitle",
                    language="eng",
                    title="Embedded Title",
                    extra_tags={"BPS": "remove-me", "EXTERNAL_KEEP": "keep-me"},
                    subtitle_lines=("embedded subtitle payload",),
                ),
            ],
        )

        write_vtt(
            self.workdir / "substitute_s01e01_2_deu.vtt",
            ("external subtitle payload",),
        )

        prepare_pattern_database(
            self.database_path,
            r"^substitute_(s[0-9]+e[0-9]+)\.mkv$",
            [
                PatternTrackSpec(index=0, source_index=0, track_type=TrackType.VIDEO),
|
|
||||||
PatternTrackSpec(index=1, source_index=1, track_type=TrackType.AUDIO),
|
|
||||||
PatternTrackSpec(index=2, source_index=2, track_type=TrackType.SUBTITLE),
|
|
||||||
],
|
|
||||||
)
|
|
||||||
|
|
||||||
completed = run_ffx_convert(
|
|
||||||
self.workdir,
|
|
||||||
self.home_dir,
|
|
||||||
self.database_path,
|
|
||||||
"--video-encoder",
|
|
||||||
"copy",
|
|
||||||
"--no-tmdb",
|
|
||||||
"--no-prompt",
|
|
||||||
"--no-signature",
|
|
||||||
"--subtitle-directory",
|
|
||||||
str(self.workdir),
|
|
||||||
"--subtitle-prefix",
|
|
||||||
"substitute",
|
|
||||||
str(source_path),
|
|
||||||
)
|
|
||||||
self.assertCompleted(completed)
|
|
||||||
|
|
||||||
output_path = expected_output_path(self.workdir, source_filename)
|
|
||||||
streams = ffprobe_json(output_path)["streams"]
|
|
||||||
subtitle_stream = [stream for stream in streams if stream["codec_type"] == "subtitle"][0]
|
|
||||||
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "language"), "deu")
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "title"), "Embedded Title")
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "THIS_IS"), "embedded-subtitle")
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "EXTERNAL_KEEP"), "keep-me")
|
|
||||||
self.assertIsNone(get_tag(subtitle_stream, "BPS"))
|
|
||||||
|
|
||||||
extracted_subtitle = extract_first_subtitle_text(self.workdir, output_path)
|
|
||||||
self.assertIn("external subtitle payload", extracted_subtitle)
|
|
||||||
self.assertNotIn("embedded subtitle payload", extracted_subtitle)
|
|
||||||
|
|
||||||
def test_subtitle_prefix_uses_configured_base_directory_when_directory_is_omitted(self):
|
|
||||||
source_filename = "substitute_default_s01e01.mkv"
|
|
||||||
subtitle_prefix = "substitute_default"
|
|
||||||
subtitles_base_dir = self.home_dir / ".local" / "var" / "sync" / "subtitles"
|
|
||||||
resolved_subtitle_dir = subtitles_base_dir / subtitle_prefix
|
|
||||||
resolved_subtitle_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
self.write_config(
|
|
||||||
{
|
|
||||||
"subtitlesDirectory": "~/.local/var/sync/subtitles",
|
|
||||||
"metadata": {
|
|
||||||
"streams": {
|
|
||||||
"remove": ["BPS"],
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
)
|
|
||||||
source_path = create_source_fixture(
|
|
||||||
self.workdir,
|
|
||||||
source_filename,
|
|
||||||
[
|
|
||||||
SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
|
|
||||||
SourceTrackSpec(TrackType.AUDIO, identity="audio-1", language="eng", title="Main Audio"),
|
|
||||||
SourceTrackSpec(
|
|
||||||
TrackType.SUBTITLE,
|
|
||||||
identity="embedded-subtitle",
|
|
||||||
language="eng",
|
|
||||||
title="Embedded Title",
|
|
||||||
extra_tags={"BPS": "remove-me", "EXTERNAL_KEEP": "keep-me"},
|
|
||||||
subtitle_lines=("embedded subtitle payload",),
|
|
||||||
),
|
|
||||||
],
|
|
||||||
)
|
|
||||||
|
|
||||||
write_vtt(
|
|
||||||
resolved_subtitle_dir / f"{subtitle_prefix}_s01e01_2_deu.vtt",
|
|
||||||
("external subtitle payload",),
|
|
||||||
)
|
|
||||||
|
|
||||||
prepare_pattern_database(
|
|
||||||
self.database_path,
|
|
||||||
r"^substitute_default_(s[0-9]+e[0-9]+)\.mkv$",
|
|
||||||
[
|
|
||||||
PatternTrackSpec(index=0, source_index=0, track_type=TrackType.VIDEO),
|
|
||||||
PatternTrackSpec(index=1, source_index=1, track_type=TrackType.AUDIO),
|
|
||||||
PatternTrackSpec(index=2, source_index=2, track_type=TrackType.SUBTITLE),
|
|
||||||
],
|
|
||||||
)
|
|
||||||
|
|
||||||
completed = run_ffx_convert(
|
|
||||||
self.workdir,
|
|
||||||
self.home_dir,
|
|
||||||
self.database_path,
|
|
||||||
"--video-encoder",
|
|
||||||
"copy",
|
|
||||||
"--no-tmdb",
|
|
||||||
"--no-prompt",
|
|
||||||
"--no-signature",
|
|
||||||
"--subtitle-prefix",
|
|
||||||
subtitle_prefix,
|
|
||||||
str(source_path),
|
|
||||||
)
|
|
||||||
self.assertCompleted(completed)
|
|
||||||
|
|
||||||
output_path = expected_output_path(self.workdir, source_filename)
|
|
||||||
streams = ffprobe_json(output_path)["streams"]
|
|
||||||
subtitle_stream = [stream for stream in streams if stream["codec_type"] == "subtitle"][0]
|
|
||||||
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "language"), "deu")
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "title"), "Embedded Title")
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "THIS_IS"), "embedded-subtitle")
|
|
||||||
self.assertEqual(get_tag(subtitle_stream, "EXTERNAL_KEEP"), "keep-me")
|
|
||||||
self.assertIsNone(get_tag(subtitle_stream, "BPS"))
|
|
||||||
|
|
||||||
extracted_subtitle = extract_first_subtitle_text(self.workdir, output_path)
|
|
||||||
self.assertIn("external subtitle payload", extracted_subtitle)
|
|
||||||
self.assertNotIn("embedded subtitle payload", extracted_subtitle)
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
unittest.main()
|
|
||||||
@@ -1,228 +0,0 @@
from __future__ import annotations

import json
import os
from pathlib import Path
import subprocess
import sys
import tempfile
import unittest

from tests.support.ffx_bundle import (
    SourceTrackSpec,
    build_controller_context,
    create_source_fixture,
    dispose_controller_context,
)

from ffx.pattern_controller import PatternController
from ffx.show_controller import ShowController
from ffx.show_descriptor import ShowDescriptor
from ffx.track_codec import TrackCodec
from ffx.track_descriptor import TrackDescriptor
from ffx.track_type import TrackType

try:
    import pytest
except ImportError:  # pragma: no cover - unittest-only environments
    pytest = None

if pytest is not None:
    pytestmark = [pytest.mark.integration]


SRC_ROOT = Path(__file__).resolve().parents[2] / "src"


def run_ffx_unmux(workdir: Path, home_dir: Path, database_path: Path, *args: str) -> subprocess.CompletedProcess[str]:
    env = os.environ.copy()
    env["HOME"] = str(home_dir)
    existing_pythonpath = env.get("PYTHONPATH", "")
    env["PYTHONPATH"] = str(SRC_ROOT) if not existing_pythonpath else f"{SRC_ROOT}{os.pathsep}{existing_pythonpath}"

    command = [
        sys.executable,
        "-m",
        "ffx",
        "--database-file",
        str(database_path),
        "unmux",
        *args,
    ]
    return subprocess.run(command, cwd=workdir, env=env, capture_output=True, text=True)


class UnmuxCliTests(unittest.TestCase):
    def setUp(self):
        self.tempdir = tempfile.TemporaryDirectory()
        self.workdir = Path(self.tempdir.name)
        self.home_dir = self.workdir / "home"
        self.home_dir.mkdir()
        self.database_path = self.workdir / "test.db"

    def tearDown(self):
        self.tempdir.cleanup()

    def write_config(self, data: dict) -> None:
        config_dir = self.home_dir / ".local" / "etc"
        config_dir.mkdir(parents=True, exist_ok=True)
        (config_dir / "ffx.json").write_text(json.dumps(data), encoding="utf-8")

    def assertCompleted(self, completed):
        if completed.returncode != 0:
            self.fail(
                "FFX unmux failed\n"
                f"STDOUT:\n{completed.stdout}\n"
                f"STDERR:\n{completed.stderr}"
            )

    def seed_matching_show(self, pattern_expression: str, *, indicator_season_digits: int, indicator_episode_digits: int) -> None:
        context = build_controller_context(self.database_path)
        try:
            ShowController(context).updateShow(
                ShowDescriptor(
                    id=1,
                    name="Unmux Test Show",
                    year=2000,
                    indicator_season_digits=indicator_season_digits,
                    indicator_episode_digits=indicator_episode_digits,
                )
            )
            PatternController(context).savePatternSchema(
                {
                    "show_id": 1,
                    "pattern": pattern_expression,
                    "quality": 0,
                    "notes": "",
                },
                trackDescriptors=[
                    TrackDescriptor(
                        index=0,
                        source_index=0,
                        track_type=TrackType.VIDEO,
                        codec_name=TrackCodec.H264,
                        tags={},
                        disposition_set=set(),
                    )
                ],
            )
        finally:
            dispose_controller_context(context)

    def test_subtitles_only_without_output_directory_uses_configured_base_plus_label(self):
        self.write_config(
            {
                "subtitlesDirectory": "~/.local/var/sync/subtitles",
            }
        )
        source_filename = "unmux_s01e01.mkv"
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
                SourceTrackSpec(
                    TrackType.SUBTITLE,
                    identity="subtitle-1",
                    language="eng",
                    subtitle_lines=("subtitle payload",),
                ),
            ],
        )

        completed = run_ffx_unmux(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--subtitles-only",
            "--label",
            "dball",
            str(source_path),
        )
        self.assertCompleted(completed)

        expected_directory = self.home_dir / ".local" / "var" / "sync" / "subtitles" / "dball"
        self.assertTrue(expected_directory.is_dir(), expected_directory)

    def test_unmux_uses_configured_indicator_digits_in_output_filenames(self):
        self.write_config(
            {
                "defaultIndicatorSeasonDigits": 3,
                "defaultIndicatorEpisodeDigits": 4,
            }
        )
        source_filename = "unmux_s01e01.mkv"
        output_directory = self.workdir / "unmux-output"
        output_directory.mkdir()
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
            ],
        )

        completed = run_ffx_unmux(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--label",
            "dball",
            "--output-directory",
            str(output_directory),
            str(source_path),
        )
        self.assertCompleted(completed)

        output_filenames = sorted(path.name for path in output_directory.iterdir())
        self.assertEqual(1, len(output_filenames), output_filenames)
        self.assertTrue(
            output_filenames[0].startswith("dball_S001E0001_"),
            output_filenames,
        )

    def test_unmux_prefers_matched_show_indicator_digits_over_config_defaults(self):
        self.write_config(
            {
                "defaultIndicatorSeasonDigits": 4,
                "defaultIndicatorEpisodeDigits": 4,
            }
        )
        self.seed_matching_show(
            r"^unmux_([sS][0-9]+[eE][0-9]+)\.mkv$",
            indicator_season_digits=1,
            indicator_episode_digits=3,
        )
        source_filename = "unmux_s01e01.mkv"
        output_directory = self.workdir / "unmux-output"
        output_directory.mkdir()
        source_path = create_source_fixture(
            self.workdir,
            source_filename,
            [
                SourceTrackSpec(TrackType.VIDEO, identity="video-0"),
            ],
        )

        completed = run_ffx_unmux(
            self.workdir,
            self.home_dir,
            self.database_path,
            "--label",
            "dball",
            "--output-directory",
            str(output_directory),
            str(source_path),
        )
        self.assertCompleted(completed)

        output_filenames = sorted(path.name for path in output_directory.iterdir())
        self.assertEqual(1, len(output_filenames), output_filenames)
        self.assertTrue(
            output_filenames[0].startswith("dball_S1E001_"),
            output_filenames,
        )


if __name__ == "__main__":
    unittest.main()
@@ -1 +0,0 @@
# Legacy custom FFX test harness modules.
@@ -1,64 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .basename_combinator import BasenameCombinator

from .indicator_combinator import IndicatorCombinator
from .label_combinator import LabelCombinator

class BasenameCombinator2(BasenameCombinator):
    """documentation_site"""

    VARIANT = 'B2'

    # def __init__(self, SubCombinators: dict = {}, context = None):
    def __init__(self, context = None):

        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getVariant(self):
        return BasenameCombinator2.VARIANT

    def getPayload(self):
        return ''

    def assertFunc(self, mediaDescriptor: MediaDescriptor):
        pass

    def shouldFail(self):
        return False

    def getYield(self):

        for L in LabelCombinator.getAllClassReferences():
            # for I in IndicatorCombinator.getAllClassReferences():
            #     for S in SiteCombinator.getAllClassReferences():
            #         for T in TitleCombinator.getAllClassReferences():
            #

            l = L(self._context)

            yieldObj = {}

            yieldObj['identifier'] = self.getIdentifier()

            yieldObj['variants'] = [self.getVariant(),
                                    l.getVariant()]

            yieldObj['payload'] = {'label': l.getPayload()}

            yieldObj['assertSelectors'] = ['B', 'L']

            yieldObj['assertFuncs'] = [self.assertFunc,
                                       l.assertFunc]

            yieldObj['shouldFail'] = (self.shouldFail()
                                      | l.shouldFail())

            yield yieldObj
@@ -1,36 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

class BasenameCombinator():

    IDENTIFIER = 'basename'

    BASENAME = 'media'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return BasenameCombinator.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[20:-3]
                for p
                in glob.glob(f"{ basePath }/basename_combinator_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        module_name = f"tests.legacy.basename_combinator_{ identifier }"
        importlib.import_module(module_name)
        for name, obj in inspect.getmembers(sys.modules[module_name]):
            #HINT: Excluding MediaCombinator as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'BasenameCombinator' and name.startswith('BasenameCombinator'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [BasenameCombinator.getClassReference(i) for i in BasenameCombinator.list()]
@@ -1,107 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .basename_combinator import BasenameCombinator

from .indicator_combinator import IndicatorCombinator
from .label_combinator import LabelCombinator


class BasenameCombinator0(BasenameCombinator):
    """base[_indicator]"""

    VARIANT = 'B0'

    # def __init__(self, SubCombinators: dict = {}, context = None):
    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getVariant(self):
        return BasenameCombinator0.VARIANT

    def getPayload(self, indicator = '', label = ''):

        basename = BasenameCombinator.BASENAME
        expectedBasename = label if label else BasenameCombinator.BASENAME

        if indicator:
            basename += f"_{indicator}"
            expectedBasename += f"_{indicator}"

        return {'basename': basename,
                'label': label,
                'expectedBasename': expectedBasename}


    def assertFunc(self, indicator = '', label = ''):

        def f(testObj: dict = {}):

            if not 'filenames' in testObj.keys():
                raise KeyError("testObj does not contain key 'filenames'")

            fNames = testObj['filenames']

            assert len(fNames) == 1, "More than one result file was created"

            resultFilename = fNames[0]

            fTokens = resultFilename.split('.')

            resultBasename = '.'.join(fTokens[:-1])
            resultExtension = fTokens[-1]

            if not indicator and not label:

                assert resultBasename == BasenameCombinator.BASENAME, f"Result basename is not {BasenameCombinator.BASENAME}"
            if not indicator and label:
                assert resultBasename == label, f"Result basename is not {label}"
            if indicator and not label:
                assert resultBasename == f"{BasenameCombinator.BASENAME}_{indicator}", f"Result basename is not {BasenameCombinator.BASENAME}_{indicator}"
            if indicator and label:
                assert resultBasename == f"{label}_{indicator}", f"Result basename is not {label}_{indicator}"

        return f


    def shouldFail(self):
        return False

    def getYield(self):

        ic = IndicatorCombinator(self._context)

        for L in LabelCombinator.getAllClassReferences():
            for i in ic.getYield():

                l = L(self._context)

                indicator = i['indicator']
                indicatorVariant = i['variant']

                yieldObj = {}

                yieldObj['identifier'] = self.getIdentifier()

                yieldObj['variants'] = [self.getVariant(),
                                        l.getVariant(),
                                        indicatorVariant]

                yieldObj['payload'] = self.getPayload(indicator = indicator,
                                                      label = l.getPayload())

                yieldObj['assertSelectors'] = ['B', 'L', 'I']

                yieldObj['assertFuncs'] = [self.assertFunc(indicator, l.getPayload()), l.assertFunc, ic.assertFunc]

                yieldObj['shouldFail'] = (self.shouldFail()
                                          | l.shouldFail()
                                          | ic.shouldFail())

                yield yieldObj
@@ -1,159 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .basename_combinator import BasenameCombinator

from .indicator_combinator import IndicatorCombinator
from .label_combinator import LabelCombinator
from .title_combinator import TitleCombinator
from .release_combinator import ReleaseCombinator
from .show_combinator import ShowCombinator


class BasenameCombinator2(BasenameCombinator):
    """show[_indicator]_group"""

    VARIANT = 'B2'

    # def __init__(self, SubCombinators: dict = {}, context = None):
    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getVariant(self):
        return BasenameCombinator2.VARIANT

    #
    # SHOW_LIST = [
    #     'Boruto: Naruto Next Generations (2017)',
    #     'The Rising of the Shield Hero (2019)',
    #     'Scrubs - Die Anfänger (2001)'
    # ]
    #
    # RELEASE_LIST = [
    #     ".GerEngSub.AAC.1080pINDICATOR.WebDL.x264-Tanuki",
    #     ".German.AC3.DL.1080pINDICATOR.BluRay.x264-AST4u",
    #     "-720pINDICATOR"
    # ]


    def getPayload(self,
                   indicator = '',
                   label = '',
                   show = '',
                   release = ''):

        if label:
            basename = label
            expectedBasename = label
            if indicator:
                basename += f"_{indicator}"
                expectedBasename += f"_{indicator}"
        else:
            basename = show+release
            expectedBasename = basename

        return {'basename': basename,
                'label': label,
                'expectedBasename': expectedBasename}


    def createAssertFunc(self,
                         indicator = '',
                         label = '',
                         show = '',
                         release = ''):

        def f(testObj: dict = {}):

            if not 'filenames' in testObj.keys():
                raise KeyError("testObj does not contain key 'filenames'")

            fNames = testObj['filenames']

            assert len(fNames) == 1, "More than one result file was created"

            resultFilename = fNames[0]

            fTokens = resultFilename.split('.')

            resultBasename = '.'.join(fTokens[:-1])
            resultExtension = fTokens[-1]

            if not indicator and not label:
                assert resultBasename == show+release, f"Result basename is not {show+release}"
            elif not indicator and label:
                assert resultBasename == label, f"Result basename is not {label}"
            elif indicator and not label:
                assert resultBasename == show+release, f"Result basename is not {show+release}"
            elif indicator and label:
                assert resultBasename == f"{label}_{indicator}", f"Result basename is not {label}_{indicator}"

        return f


    def shouldFail(self):
        return False

    def getYield(self):

        ic = IndicatorCombinator(self._context)
        sc = ShowCombinator(self._context)

        for L in LabelCombinator.getAllClassReferences():
            for iy in ic.getYield():

                indicator = iy['indicator']
                indicatorVariant = iy['variant']

                rc = ReleaseCombinator(self._context, indicator=indicator)

                for sy in sc.getYield():
                    for ry in rc.getYield():

                        l = L(self._context)

                        show = sy['show']
                        showVariant = sy['variant']

                        release = ry['release']
                        releaseVariant = ry['variant']

                        yieldObj = {}

                        yieldObj['identifier'] = self.getIdentifier()

                        yieldObj['variants'] = [self.getVariant(),
                                                l.getVariant(),
                                                indicatorVariant,
                                                showVariant,
                                                releaseVariant]

                        yieldObj['payload'] = self.getPayload(indicator = indicator,
                                                              label = l.getPayload(),
                                                              show = show,
                                                              release = release)

                        yieldObj['assertSelectors'] = ['B', 'L', 'I', 'S', 'R']

                        yieldObj['assertFuncs'] = [self.createAssertFunc(indicator,
                                                                         l.getPayload(),
                                                                         show = show,
                                                                         release = release),
                                                   l.assertFunc,
                                                   ic.assertFunc,
                                                   sc.assertFunc,
                                                   rc.assertFunc]

                        yieldObj['shouldFail'] = (self.shouldFail()
                                                  | l.shouldFail()
                                                  | ic.shouldFail()
                                                  | sc.shouldFail()
                                                  | rc.shouldFail())

                        yield yieldObj
@@ -1,13 +0,0 @@
class Combinator():

    def __init__(self, SubCombinations: dict):
        self._SubCombinators = SubCombinations

    def getPayload(self):
        pass

    def assertFunc(self, testObj):
        pass

    def getYield(yieldObj: dict):
        pass
@@ -1,36 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

class DispositionCombinator2():

    IDENTIFIER = 'disposition2'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return DispositionCombinator2.IDENTIFIER

    def getVariant(self):
        return self._variant

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[25:-3]
                for p
                in glob.glob(f"{ basePath }/disposition_combinator_2_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        module_name = f"tests.legacy.disposition_combinator_2_{ identifier }"
        importlib.import_module(module_name)
        for name, obj in inspect.getmembers(sys.modules[module_name]):
            #HINT: Excluding DispositionCombination as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'DispositionCombinator2' and name.startswith('DispositionCombinator2'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [DispositionCombinator2.getClassReference(i) for i in DispositionCombinator2.list()]
@@ -1,76 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_2 import DispositionCombinator2


class DispositionCombinator20(DispositionCombinator2):

    # COMMENT
    # DESCRIPTIONS

    VARIANT = 'D00'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator20.VARIANT


    def getPayload(self):

        subtrack0 = set()
        subtrack1 = set()

        #NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks
        #      so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            subtrack0.add(TrackDisposition.FORCED)  # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS)  # DESCRIPTIONS

        return (subtrack0,
                subtrack1)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if not 'tracks' in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'forced' disposition"
                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'descriptions' disposition"

        else:

            def f(assertObj: dict):
                if not 'tracks' in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

        return f


    def shouldFail(self):
        return False
@@ -1,79 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_2 import DispositionCombinator2


class DispositionCombinator21(DispositionCombinator2):

    VARIANT = 'D10'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator21.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
        else:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)  # COMMENT
            subtrack1.add(TrackDisposition.FORCED)  # DESCRIPTIONS

        return (subtrack0,
                subtrack1)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"
                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'forced' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -1,79 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_2 import DispositionCombinator2


class DispositionCombinator22(DispositionCombinator2):

    VARIANT = 'D01'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator22.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
        else:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            subtrack0.add(TrackDisposition.FORCED)  # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS)  # DESCRIPTIONS

        return (subtrack0,
                subtrack1)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'forced' disposition"
                # source subIndex 1
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'descriptions' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # source subIndex 1
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -1,43 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_2 import DispositionCombinator2


class DispositionCombinator23(DispositionCombinator2):

    VARIANT = 'D11'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator23.VARIANT

    def getPayload(self):

        subtrack0 = set([TrackDisposition.DEFAULT])
        subtrack1 = set([TrackDisposition.DEFAULT])

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)  # COMMENT
            subtrack1.add(TrackDisposition.FORCED)  # DESCRIPTIONS

        return (subtrack0,
                subtrack1)

    # TODO: tmdb cases
    def createAssertFunc(self):
        def f(assertObj: dict = {}):
            pass
        return f

    def shouldFail(self):
        return True
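Each combinator pairs a payload (the disposition sets to apply per subtitle track) with an assert function that validates the resulting track descriptors. A minimal sketch of how a driver might consume such a pair — the driver, the `FakeTrackDescriptor` class, and the identity "transcode" here are hypothetical stand-ins, not part of ffx:

```python
from enum import Enum, auto

class TrackDisposition(Enum):
    # simplified stand-in for ffx.track_disposition
    DEFAULT = auto()
    FORCED = auto()

class FakeTrackDescriptor:
    """Hypothetical descriptor exposing just the flag lookup the asserts use."""
    def __init__(self, dispositions):
        self._dispositions = set(dispositions)

    def getDispositionFlag(self, disposition):
        return disposition in self._dispositions

def run_case(payload, assert_func, transcode):
    # transcode maps the input disposition sets to output track descriptors
    # (mimicking an ffmpeg run); the assert function then checks the result.
    tracks = transcode(payload)
    assert_func({'tracks': tracks})

# identity "transcode": dispositions survive unchanged
identity = lambda payload: [FakeTrackDescriptor(s) for s in payload]

payload = (set(), {TrackDisposition.FORCED})

def check(assertObj):
    tracks = assertObj['tracks']
    assert not tracks[0].getDispositionFlag(TrackDisposition.DEFAULT)
    assert tracks[1].getDispositionFlag(TrackDisposition.FORCED)

run_case(payload, check, identity)  # raises AssertionError on mismatch
```

With a real transcode step in place of `identity`, a failing disposition mapping surfaces as an `AssertionError` carrying the stream index, matching the f-string messages in the combinators above.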
@@ -1,35 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class DispositionCombinator3():

    IDENTIFIER = 'disposition3'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return DispositionCombinator3.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[25:-3]  # strip 'disposition_combinator_3_' prefix and '.py' suffix
                for p
                in glob.glob(f"{ basePath }/disposition_combinator_3_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        module_name = f"tests.legacy.disposition_combinator_3_{ identifier }"
        importlib.import_module(module_name)
        for name, obj in inspect.getmembers(sys.modules[module_name]):
            # HINT: Excluding DispositionCombinator3 itself, as it seems to be included by the import (?)
            if inspect.isclass(obj) and name != 'DispositionCombinator3' and name.startswith('DispositionCombinator3'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [DispositionCombinator3.getClassReference(i) for i in DispositionCombinator3.list()]
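The `getClassReference` discovery above imports a module by name and then picks out the subclass whose name extends the base class name. A simplified, self-contained sketch of that pattern, scanning a namespace dict instead of an imported module (names here are illustrative, not from ffx):

```python
import inspect

class CombinatorBase:
    pass

class CombinatorBase7(CombinatorBase):
    # hypothetical variant subclass, as the glob'd modules would provide
    VARIANT = 'X7'

def get_class_reference(namespace: dict, base_name: str):
    # Pick the first class whose name extends base_name, excluding the base
    # itself -- mirrors the inspect.getmembers() loop in getClassReference().
    for name, obj in namespace.items():
        if inspect.isclass(obj) and name != base_name and name.startswith(base_name):
            return obj

cls = get_class_reference(globals(), 'CombinatorBase')
print(cls.VARIANT)  # → X7
```

In the real class the namespace comes from `sys.modules[module_name]` after `importlib.import_module`, which is why the base class must be excluded by name: importing the variant module also pulls in its base.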
@@ -1,92 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator30(DispositionCombinator3):

    VARIANT = 'D000'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator30.VARIANT

    def getPayload(self):

        subtrack0 = set()
        subtrack1 = set()
        subtrack2 = set()

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            subtrack0.add(TrackDisposition.FORCED)  # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS)  # DESCRIPTIONS
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):

                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")

                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'forced' disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'descriptions' disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'hearing_impaired' disposition"

        else:

            def f(assertObj: dict):

                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")

                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -1,97 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator31(DispositionCombinator3):

    VARIANT = 'D100'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator31.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set()
            subtrack2 = set([TrackDisposition.DEFAULT])
        else:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
            subtrack2 = set()

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)  # COMMENT
            subtrack1.add(TrackDisposition.FORCED)  # DESCRIPTIONS
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):

                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")

                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'forced' disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'hearing_impaired' disposition"

        else:

            def f(assertObj: dict):

                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")

                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -1,89 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator32(DispositionCombinator3):

    VARIANT = 'D010'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator32.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
            subtrack2 = set()
        else:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
            subtrack2 = set()

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)  # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS)  # DESCRIPTIONS
            subtrack2.add(TrackDisposition.FORCED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"
                # source subIndex 1
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'descriptions' disposition"
                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'forced' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # source subIndex 1
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -1,89 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator33(DispositionCombinator3):

    VARIANT = 'D001'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator33.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
            subtrack2 = set()
        else:
            subtrack0 = set()
            subtrack1 = set()
            subtrack2 = set([TrackDisposition.DEFAULT])

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)  # COMMENT
            subtrack1.add(TrackDisposition.FORCED)  # DESCRIPTIONS
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"
                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'forced' disposition"
                # source subIndex 2
                assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'hearing_impaired' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj:
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                # source subIndex 2
                assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -1,46 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator34(DispositionCombinator3):

    VARIANT = 'D101'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator34.VARIANT

    def getPayload(self):

        subtrack0 = set([TrackDisposition.DEFAULT])
        subtrack1 = set()
        subtrack2 = set([TrackDisposition.DEFAULT])

        # NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            subtrack0.add(TrackDisposition.FORCED)  # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS)  # DESCRIPTIONS
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    # TODO: tmdb cases
    def createAssertFunc(self):
        def f(assertObj: dict = {}):
            pass
        return f

    def shouldFail(self):
        return True
@@ -1,275 +0,0 @@
import os, math, tempfile, click

from ffx.process import executeProcess

from ffx.media_descriptor import MediaDescriptor
from ffx.media_descriptor_change_set import MediaDescriptorChangeSet
from ffx.track_type import TrackType

from ffx.helper import dictCache
from ffx.configuration_controller import ConfigurationController


SHORT_SUBTITLE_SEQUENCE = [{'start': 1, 'end': 2, 'text': 'yolo'},
                           {'start': 3, 'end': 4, 'text': 'zolo'},
                           {'start': 5, 'end': 6, 'text': 'golo'}]
def getTimeString(hours: float = 0.0,
                  minutes: float = 0.0,
                  seconds: float = 0.0,
                  millis: float = 0.0,
                  format: str = ''):

    duration = (hours * 3600.0
                + minutes * 60.0
                + seconds
                + millis / 1000.0)

    hours = math.floor(duration / 3600.0)
    remaining = duration - 3600.0 * hours

    minutes = math.floor(remaining / 60.0)
    remaining = remaining - 60.0 * minutes

    seconds = math.floor(remaining)
    remaining = remaining - seconds

    millis = math.floor(remaining * 1000)

    if format == 'ass':
        # ASS timestamps use centiseconds (H:MM:SS.cc), so drop the last millis digit.
        return f"{hours:01d}:{minutes:02d}:{seconds:02d}.{millis // 10:02d}"

    if format == 'srt':
        # SRT uses a comma as the decimal separator (HH:MM:SS,mmm).
        return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

    # vtt (and default)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}.{millis:03d}"
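The time decomposition above can be sketched as a self-contained helper (the name `format_timestamp` is illustrative, not part of the suite; the centisecond handling for ASS is an assumption based on the ASS `H:MM:SS.cc` timestamp format):

```python
import math

def format_timestamp(duration: float, ass: bool = False) -> str:
    # Decompose a duration in seconds into hours, minutes, seconds, millis.
    hours = math.floor(duration / 3600.0)
    remaining = duration - 3600.0 * hours
    minutes = math.floor(remaining / 60.0)
    remaining -= 60.0 * minutes
    seconds = math.floor(remaining)
    millis = math.floor((remaining - seconds) * 1000)
    if ass:
        # ASS uses centiseconds: H:MM:SS.cc
        return f"{hours:01d}:{minutes:02d}:{seconds:02d}.{millis // 10:02d}"
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}.{millis:03d}"

print(format_timestamp(3723.5))            # 01:02:03.500
print(format_timestamp(3723.5, ass=True))  # 1:02:03.50
```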
def createAssFile(entries: list, directory = None):

    # Example of the generated file:
    #
    # [Script Info]
    # ; Script generated by FFmpeg/Lavc61.3.100
    # ScriptType: v4.00+
    # PlayResX: 384
    # PlayResY: 288
    # ScaledBorderAndShadow: yes
    # YCbCr Matrix: None
    #
    # [V4+ Styles]
    # Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
    # Style: Default,Arial,16,&Hffffff,&Hffffff,&H0,&H0,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1
    #
    # [Events]
    # Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
    # Dialogue: 0,0:00:01.00,0:00:02.00,Default,,0,0,0,,yolo
    # Dialogue: 0,0:00:03.00,0:00:04.00,Default,,0,0,0,,zolo
    # Dialogue: 0,0:00:05.00,0:00:06.00,Default,,0,0,0,,golo
    tmpFileName = tempfile.mktemp(suffix=".ass", dir = directory)

    with open(tmpFileName, 'w') as tmpFile:

        tmpFile.write("[Script Info]\n")
        tmpFile.write("; Script generated by Ffx\n")
        tmpFile.write("ScriptType: v4.00+\n")
        tmpFile.write("PlayResX: 384\n")
        tmpFile.write("PlayResY: 288\n")
        tmpFile.write("ScaledBorderAndShadow: yes\n")
        tmpFile.write("YCbCr Matrix: None\n")
        tmpFile.write("\n")
        tmpFile.write("[V4+ Styles]\n")
        tmpFile.write("Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding\n")
        tmpFile.write("Style: Default,Arial,16,&Hffffff,&Hffffff,&H0,&H0,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1\n")
        tmpFile.write("\n")
        tmpFile.write("[Events]\n")
        tmpFile.write("Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text\n")

        for entry in entries:
            tmpFile.write(f"Dialogue: 0,{getTimeString(seconds=entry['start'], format='ass')},{getTimeString(seconds=entry['end'], format='ass')},Default,,0,0,0,,{entry['text']}\n")

    return tmpFileName
def createSrtFile(entries: list, directory = None):

    # Example of the generated file:
    #
    # 1
    # 00:00:00,000 --> 00:00:02,500
    # Welcome to the Example Subtitle File!
    #
    # 2
    # 00:00:03,000 --> 00:00:06,000
    # This is a demonstration of SRT subtitles.
    #
    # 3
    # 00:00:07,000 --> 00:00:10,500
    # You can use SRT files to add subtitles to your videos.

    tmpFileName = tempfile.mktemp(suffix=".srt", dir = directory)

    with open(tmpFileName, 'w') as tmpFile:

        # SRT cue numbering starts at 1, not 0.
        for entryIndex, entry in enumerate(entries, start = 1):

            tmpFile.write(f"{entryIndex}\n")
            tmpFile.write(f"{getTimeString(seconds=entry['start'], format='srt')} --> {getTimeString(seconds=entry['end'], format='srt')}\n")
            tmpFile.write(f"{entry['text']}\n\n")

    return tmpFileName
def createVttFile(entries: list, directory = None):

    # Example of the generated file:
    #
    # WEBVTT
    #
    # 01:20:33.050 --> 01:20:35.050
    # Yolo

    tmpFileName = tempfile.mktemp(suffix=".vtt", dir = directory)

    with open(tmpFileName, 'w') as tmpFile:

        tmpFile.write("WEBVTT\n")

        for entry in entries:

            tmpFile.write("\n")
            tmpFile.write(f"{getTimeString(seconds=entry['start'])} --> {getTimeString(seconds=entry['end'])}\n")
            tmpFile.write(f"{entry['text']}\n")

    return tmpFileName
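The cue rendering above can be sketched standalone, without the ffx imports (`to_srt` is a hypothetical helper; the 1-based cue numbers and comma decimal separator follow the SRT convention, and the entries mirror SHORT_SUBTITLE_SEQUENCE):

```python
entries = [{'start': 1, 'end': 2, 'text': 'yolo'},
           {'start': 3, 'end': 4, 'text': 'zolo'},
           {'start': 5, 'end': 6, 'text': 'golo'}]

def to_srt(entries) -> str:
    lines = []
    for i, e in enumerate(entries, start=1):  # SRT cue numbering starts at 1
        start = f"00:00:{e['start']:02d},000"  # SRT uses a comma decimal separator
        end = f"00:00:{e['end']:02d},000"
        lines += [str(i), f"{start} --> {end}", e['text'], ""]
    return "\n".join(lines)

print(to_srt(entries))
```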
def createMediaTestFile(mediaDescriptor: MediaDescriptor,
                        directory: str = '',
                        baseName: str = 'media',
                        format: str = '',
                        extension: str = 'mkv',
                        sizeX: int = 1280,
                        sizeY: int = 720,
                        rate: int = 25,
                        length: int = 10,
                        logger = None):

    commandTokens = ['ffmpeg', '-y']

    generatorCache = []
    generatorTokens = []
    mappingTokens = []
    importTokens = []
    metadataTokens = []

    for mediaTagKey, mediaTagValue in mediaDescriptor.getTags().items():
        metadataTokens += ['-metadata:g', f"{mediaTagKey}={mediaTagValue}"]

    subIndexCounter = {}

    for trackDescriptor in mediaDescriptor.getTrackDescriptors():

        trackType = trackDescriptor.getType()

        if trackType == TrackType.VIDEO:

            cacheIndex, generatorCache = dictCache({'type': TrackType.VIDEO}, generatorCache)

            # Only add a new generator input if no equivalent one is cached yet.
            if cacheIndex == -1:
                generatorTokens += ['-f',
                                    'lavfi',
                                    '-i',
                                    f"color=size={sizeX}x{sizeY}:rate={rate}:color=black"]

            sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex
            mappingTokens += ['-map', f"{sourceIndex}:v:0"]

            if trackType not in subIndexCounter:
                subIndexCounter[trackType] = 0
            for mediaTagKey, mediaTagValue in trackDescriptor.getTags().items():
                metadataTokens += [f"-metadata:s:{trackType.indicator()}:{subIndexCounter[trackType]}",
                                   f"{mediaTagKey}={mediaTagValue}"]
            subIndexCounter[trackType] += 1
        if trackType == TrackType.AUDIO:

            audioLayout = 'stereo'

            cacheIndex, generatorCache = dictCache({'type': TrackType.AUDIO, 'layout': audioLayout}, generatorCache)

            if cacheIndex == -1:
                generatorTokens += ['-f',
                                    'lavfi',
                                    '-i',
                                    f"anullsrc=channel_layout={audioLayout}:sample_rate=44100"]

            sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex
            mappingTokens += ['-map', f"{sourceIndex}:a:0"]

            if trackType not in subIndexCounter:
                subIndexCounter[trackType] = 0
            for mediaTagKey, mediaTagValue in trackDescriptor.getTags().items():
                metadataTokens += [f"-metadata:s:{trackType.indicator()}:{subIndexCounter[trackType]}",
                                   f"{mediaTagKey}={mediaTagValue}"]
            subIndexCounter[trackType] += 1
        if trackType == TrackType.SUBTITLE:

            cacheIndex, generatorCache = dictCache({'type': TrackType.SUBTITLE}, generatorCache)

            if cacheIndex == -1:
                importTokens = ['-i', createVttFile(SHORT_SUBTITLE_SEQUENCE, directory=directory if directory else None)]

            sourceIndex = len(generatorCache) - 1 if cacheIndex == -1 else cacheIndex
            mappingTokens += ['-map', f"{sourceIndex}:s:0"]

            if trackType not in subIndexCounter:
                subIndexCounter[trackType] = 0
            for mediaTagKey, mediaTagValue in trackDescriptor.getTags().items():
                metadataTokens += [f"-metadata:s:{trackType.indicator()}:{subIndexCounter[trackType]}",
                                   f"{mediaTagKey}={mediaTagValue}"]
            subIndexCounter[trackType] += 1
    ffxContext = {'config': ConfigurationController(), 'logger': logger}
    mdcs = MediaDescriptorChangeSet(ffxContext, mediaDescriptor)

    commandTokens += (generatorTokens
                      + importTokens
                      + mappingTokens
                      + metadataTokens
                      + mdcs.generateDispositionTokens())

    commandTokens += ['-t', str(length)]

    if format:
        commandTokens += ['-f', format]

    fileName = f"{baseName}.{extension}"

    if directory:
        outputPath = os.path.join(directory, fileName)
    else:
        outputPath = fileName

    commandTokens += [outputPath]

    ctx = {'logger': logger}

    out, err, rc = executeProcess(commandTokens, context = ctx)

    if logger is not None:
        if out:
            logger.debug(f"createMediaTestFile(): Process output: {out}")
        if rc:
            logger.debug(f"createMediaTestFile(): Process returned ERROR {rc} ({err})")

    return outputPath
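The token-list assembly above can be illustrated with placeholder values (the tags, sizes, and file name here are made up for the sketch, not the suite's actual defaults):

```python
# Sketch: how the separate token lists are concatenated into one ffmpeg
# command line, for one generated video and one generated audio input.
generator_tokens = ['-f', 'lavfi', '-i', 'color=size=1280x720:rate=25:color=black',
                    '-f', 'lavfi', '-i', 'anullsrc=channel_layout=stereo:sample_rate=44100']
mapping_tokens = ['-map', '0:v:0', '-map', '1:a:0']
metadata_tokens = ['-metadata:g', 'title=example']

command = (['ffmpeg', '-y']
           + generator_tokens
           + mapping_tokens
           + metadata_tokens
           + ['-t', '10', 'media.mkv'])
print(' '.join(command))
```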
def createEmptyDirectory():
    return tempfile.mkdtemp()


def createEmptyFile(suffix=None):
    # NOTE: tempfile.mkstemp() returns an (fd, path) tuple, not just the path.
    return tempfile.mkstemp(suffix=suffix)
@@ -1,43 +0,0 @@
class IndicatorCombinator():

    IDENTIFIER = 'indicator'

    MAX_SEASON = 2
    MAX_EPISODE = 3

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return IndicatorCombinator.IDENTIFIER

    def getPayload(self, season: int = -1, episode: int = -1):
        if season == -1 and episode == -1:
            return {
                'variant': 'S00E00',
                'indicator': '',
                'season': season,
                'episode': episode
            }
        else:
            return {
                'variant': f"S{season:02d}E{episode:02d}",
                'indicator': f"S{season:02d}E{episode:02d}",
                'season': season,
                'episode': episode
            }

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False

    def getYield(self):

        yield self.getPayload()
        for season in range(IndicatorCombinator.MAX_SEASON):
            for episode in range(IndicatorCombinator.MAX_EPISODE):
                yield self.getPayload(season + 1, episode + 1)
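The enumeration performed by getYield above can be sketched standalone (`indicator_variants` is a hypothetical name; the baseline payload's variant string comes from getPayload's no-argument branch):

```python
# Standalone sketch of the season/episode enumeration (MAX_SEASON=2, MAX_EPISODE=3).
MAX_SEASON, MAX_EPISODE = 2, 3

def indicator_variants():
    yield 'S00E00'  # the "no indicator" baseline payload
    for season in range(1, MAX_SEASON + 1):
        for episode in range(1, MAX_EPISODE + 1):
            yield f"S{season:02d}E{episode:02d}"

print(list(indicator_variants()))
# ['S00E00', 'S01E01', 'S01E02', 'S01E03', 'S02E01', 'S02E02', 'S02E03']
```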
@@ -1,37 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

class LabelCombinator():

    IDENTIFIER = 'label'
    PREFIX = 'label_combinator_'

    LABEL = 'ffx'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return LabelCombinator.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[len(LabelCombinator.PREFIX):-3]
                for p
                in glob.glob(f"{basePath}/{LabelCombinator.PREFIX}*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        module_name = f"tests.legacy.{LabelCombinator.PREFIX}{identifier}"
        importlib.import_module(module_name)
        for name, obj in inspect.getmembers(sys.modules[module_name]):
            # HINT: Excluding the base LabelCombinator class itself, since it is pulled in by the import.
            if inspect.isclass(obj) and name != 'LabelCombinator' and name.startswith('LabelCombinator'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [LabelCombinator.getClassReference(i) for i in LabelCombinator.list()]
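The glob-based plugin discovery in LabelCombinator.list() can be sketched standalone (`list_variants` is a hypothetical helper; sorting is added here only for deterministic output):

```python
import glob
import os

PREFIX = 'label_combinator_'

def list_variants(base_path: str):
    # Collect 'label_combinator_<id>.py' files and strip the prefix
    # and the trailing '.py' to recover each variant identifier.
    return sorted(os.path.basename(p)[len(PREFIX):-3]
                  for p in glob.glob(os.path.join(base_path, f"{PREFIX}*.py")))
```

Note that the base module `label_combinator.py` never matches the pattern, since it lacks the trailing underscore, which is why the `p != __file__` guard in the original is effectively redundant.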
@@ -1,30 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .label_combinator import LabelCombinator


class LabelCombinator0(LabelCombinator):

    VARIANT = 'L0'

    def __init__(self, context = None):
        super().__init__(context)

    def getVariant(self):
        return LabelCombinator0.VARIANT

    def getPayload(self):
        return ''

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False
@@ -1,30 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .label_combinator import LabelCombinator


class LabelCombinator1(LabelCombinator):

    VARIANT = 'L1'

    def __init__(self, context = None):
        super().__init__(context)

    def getVariant(self):
        return LabelCombinator1.VARIANT

    def getPayload(self):
        return LabelCombinator.LABEL

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False
Some files were not shown because too many files have changed in this diff.