Compare commits
49 Commits
`click-text`...`main`
| Author | SHA1 | Date |
|---|---|---|
|  | db7700a6b9 | 11 months ago |
|  | 222234f978 | 11 months ago |
|  | 3672474ff5 | 11 months ago |
|  | 5ff0fc3fad | 11 months ago |
|  | 554ca4cc03 | 11 months ago |
|  | 51febdfcc0 | 11 months ago |
|  | 0588f47837 | 11 months ago |
|  | 8b036af47f | 11 months ago |
|  | c66d4c2568 | 11 months ago |
|  | 59dbb8985a | 11 months ago |
|  | 5e57e57ad2 | 11 months ago |
|  | 739acb0dd8 | 11 months ago |
|  | e9e535044e | 11 months ago |
|  | bc62801949 | 11 months ago |
|  | e2b6a4bf7c | 11 months ago |
|  | 4716c4d11c | 11 months ago |
|  | 871c22d8e8 | 11 months ago |
|  | e44fc93b4e | 11 months ago |
|  | 71943dbabc | 11 months ago |
|  | 0a026afba4 | 11 months ago |
|  | a4b3db3eba | 11 months ago |
|  | 06978e6862 | 11 months ago |
|  | 457b599188 | 11 months ago |
|  | 125aeca340 | 11 months ago |
|  | 1caf4beb72 | 11 months ago |
|  | 49c1164b8b | 11 months ago |
|  | 0f635b8a86 | 11 months ago |
|  | 4b9f9a0364 | 11 months ago |
|  | 2f3658de5b | 11 months ago |
|  | ec4af18e7a | 11 months ago |
|  | 58b01f2be7 | 11 months ago |
|  | 1d4507782b | 11 months ago |
|  | 8c7eee580d | 11 months ago |
|  | 303fd4bc80 | 11 months ago |
|  | 5febb96916 | 11 months ago |
|  | b16e76370b | 11 months ago |
|  | feb5441251 | 11 months ago |
|  | ea182d4ddb | 11 months ago |
|  | f853cf0f85 | 11 months ago |
|  | b492be227a | 11 months ago |
|  | 7fe5b66c0c | 11 months ago |
|  | 07cc0cd95e | 11 months ago |
|  | de2d7c0593 | 11 months ago |
|  | 826677cb03 | 11 months ago |
|  | 95aeacf694 | 11 months ago |
|  | a3bb16e850 | 11 months ago |
|  | 0ed85fce4a | 11 months ago |
|  | 1a0a5f4482 | 11 months ago |
|  | 06f6322d32 | 11 months ago |
.gitignore

```diff
@@ -1,4 +1,10 @@
 __pycache__
 junk/
-.vscode/launch.json
+.vscode
 .ipynb_checkpoints/
+ansible/inventory/hawaii.yml
+ansible/inventory/peppermint.yml
+ffx_test_report.log
+bin/conversiontest.py
+*.egg-info/
+
```
LICENSE.md (new file)

@@ -0,0 +1,595 @@

GNU General Public License
==========================

_Version 3, 29 June 2007_

_Copyright © 2007 Free Software Foundation, Inc. <<http://fsf.org/>>_

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

## Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: **(1)** assert copyright on the software, and **(2)** offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

## TERMS AND CONDITIONS

### 0. Definitions

“This License” refers to version 3 of the GNU General Public License.

“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.

To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.

A “covered work” means either the unmodified Program or a work based on the Program.

To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that **(1)** displays an appropriate copyright notice, and **(2)** tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

### 1. Source Code

The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.

A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The “System Libraries” of an executable work include anything, other than the work as a whole, that **(a)** is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and **(b)** serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

### 2. Basic Permissions

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

### 3. Protecting Users' Legal Rights From Anti-Circumvention Law

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

### 4. Conveying Verbatim Copies

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

### 5. Conveying Modified Source Versions

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

* **a)** The work must carry prominent notices stating that you modified it, and giving a relevant date.
* **b)** The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
* **c)** You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
* **d)** If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

### 6. Conveying Non-Source Forms

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

* **a)** Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
* **b)** Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either **(1)** a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or **(2)** access to copy the Corresponding Source from a network server at no charge.
* **c)** Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
* **d)** Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
* **e)** Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A “User Product” is either **(1)** a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or **(2)** anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

### 7. Additional Terms

“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

* **a)** Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
* **b)** Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
* **c)** Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
* **d)** Limiting the use for publicity purposes of names of licensors or authors of the material; or
* **e)** Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
* **f)** Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

### 8. Termination

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated **(a)** provisionally, unless and until the copyright holder explicitly and finally terminates your license, and **(b)** permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

### 9. Acceptance Not Required for Having Copies

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

### 10. Automatic Licensing of Downstream Recipients

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

### 11. Patents

A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.

A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either **(1)** cause the Corresponding Source to be so available, or **(2)** arrange to deprive yourself of the benefit of the patent license for this particular work, or **(3)** arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license **(a)** in connection with copies of the covered work conveyed by you (or copies made from those copies), or **(b)** primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

### 12. No Surrender of Others' Freedom

If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

### 13. Use with the GNU Affero General Public License

Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

### 14. Revised Versions of this License

The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

### 15. Disclaimer of Warranty

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

### 16. Limitation of Liability

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

### 17. Interpretation of Sections 15 and 16

If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

_END OF TERMS AND CONDITIONS_

## How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year> <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

    <program> Copyright (C) <year> <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type 'show c' for details.

The hypothetical commands `show w` and `show c` should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”.

You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <<http://www.gnu.org/licenses/>>.

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <<http://www.gnu.org/philosophy/why-not-lgpl.html>>.
README.md (new file)

@@ -0,0 +1,48 @@

# FFX

## Installation

Via HTTPS:

```sh
pip install git+https://<URL>/<Releaser>/ffx.git@<Branch>
```

Via git over SSH:

```sh
pip install git+ssh://<Username>@<URL>/<Releaser>/ffx.git@<Branch>
```

## Version history

### 0.1.1

Bugfixes, TMDB show identification

### 0.1.2

Bugfixes

### 0.1.3

Subtitle file imports

### 0.2.0

Tests, config file

### 0.2.1

Signature, tag cleaning, bugfixes, refactoring

### 0.2.2

CLI overrides

### 0.2.3

- PyPI packaging
- Output filename templating
- Season shifting
- DB versioning
bin/conversiontest.py (deleted)

@@ -1,32 +0,0 @@

```python
import os

from ffx.pattern_controller import PatternController

from ffx.model.show import Base
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, sessionmaker, Mapped, backref

filename = 'Boruto.Naruto.Next.Generations.S01E256.GerEngSub.AAC.1080p.WebDL.x264-Tanuki.mkv'


# Data 'input' variable
context = {}

# Initialize DB
homeDir = os.path.expanduser("~")
ffxVarDir = os.path.join(homeDir, '.local', 'var', 'ffx')
if not os.path.exists(ffxVarDir):
    os.makedirs(ffxVarDir)

context['database_url'] = f"sqlite:///{os.path.join(ffxVarDir, 'ffx.db')}"
context['database_engine'] = create_engine(context['database_url'])
context['database_session'] = sessionmaker(bind=context['database_engine'])

Base.metadata.create_all(context['database_engine'])


pc = PatternController(context)


print(pc.matchFilename(filename))
```
@ -1,554 +0,0 @@
|
|||||||
#! /usr/bin/python3
|
|
||||||
|
|
||||||
import os, sys, subprocess, json, click, time, re
|
|
||||||
|
|
||||||
from textual.app import App, ComposeResult
|
|
||||||
from textual.screen import Screen
|
|
||||||
from textual.widgets import Header, Footer, Placeholder, Label
|
|
||||||
|
|
||||||
|
|
||||||
VERSION='0.1.0'
|
|
||||||
|
|
||||||
DEFAULT_VIDEO_ENCODER = 'vp9'
|
|
||||||
|
|
||||||
DEFAULT_QUALITY = 23
|
|
||||||
|
|
||||||
DEFAULT_AV1_PRESET = 5
|
|
||||||
|
|
||||||
DEFAULT_LABEL='output'
|
|
||||||
DEFAULT_FILE_SUFFIX = 'webm'
|
|
||||||
|
|
||||||
DEFAULT_STEREO_BANDWIDTH = "128"
|
|
||||||
DEFAULT_AC3_BANDWIDTH = "256"
|
|
||||||
DEFAULT_DTS_BANDWIDTH = "320"
|
|
||||||
|
|
||||||
DEFAULT_CROP_START = 60
|
|
||||||
DEFAULT_CROP_LENGTH = 180
|
|
||||||
|
|
||||||
TEMP_FILE_NAME = "ffmpeg2pass-0.log"
|
|
||||||
|
|
||||||
|
|
||||||
MKVMERGE_METADATA_KEYS = ['BPS',
|
|
||||||
'NUMBER_OF_FRAMES',
|
|
||||||
'NUMBER_OF_BYTES',
|
|
||||||
'_STATISTICS_WRITING_APP',
|
|
||||||
'_STATISTICS_WRITING_DATE_UTC',
|
|
||||||
'_STATISTICS_TAGS']
|
|
||||||
|
|
||||||
FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']
|
|
||||||
|
|
||||||
|
|
||||||
COMMAND_TOKENS = ['ffmpeg', '-y', '-i']
|
|
||||||
NULL_TOKENS = ['-f', 'null', '/dev/null']
|
|
||||||
|
|
||||||
STREAM_TYPE_VIDEO = 'video'
|
|
||||||
STREAM_TYPE_AUDIO = 'audio'
|
|
||||||
STREAM_TYPE_SUBTITLE = 'subtitle'
|
|
||||||
|
|
||||||
STREAM_LAYOUT_6_1 = '6.1'
|
|
||||||
STREAM_LAYOUT_5_1 = '5.1(side)'
|
|
||||||
STREAM_LAYOUT_STEREO = 'stereo'
|
|
||||||
STREAM_LAYOUT_6CH = '6ch'
|
|
||||||
|
|
||||||
SEASON_EPISODE_INDICATOR_MATCH = '([sS][0-9]+)([eE][0-9]+)'
|
|
||||||
SEASON_INDICATOR_MATCH = '([sS][0-9]+)'
|
|
||||||
EPISODE_INDICATOR_MATCH = '([eE][0-9]+)'
|
|
||||||
|
|
||||||
|
|
||||||
class DashboardScreen(Screen):
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
super().__init__()
|
|
||||||
|
|
||||||
context = self.app.getContext()
|
|
||||||
context['dashboard'] = 'dashboard'
|
|
||||||
|
|
||||||
def compose(self) -> ComposeResult:
|
|
||||||
yield Header(show_clock=True)
|
|
||||||
yield Placeholder("Dashboard Screen")
|
|
||||||
yield Footer()
|
|
||||||
|
|
||||||
class WarningScreen(Screen):
|
|
||||||
def __init__(self):
|
|
||||||
super().__init__()
|
|
||||||
context = self.app.getContext()
|
|
||||||
def compose(self) -> ComposeResult:
|
|
||||||
yield Label("Warning! This file is not compliant to the defined source schema!")
|
|
||||||
yield Footer()
|
|
||||||
|
|
||||||
|
|
||||||
class SettingsScreen(Screen):
|
|
||||||
def __init__(self):
|
|
||||||
super().__init__()
|
|
||||||
context = self.app.getContext()
|
|
||||||
def compose(self) -> ComposeResult:
|
|
||||||
yield Placeholder("Settings Screen")
|
|
||||||
yield Footer()
|
|
||||||
|
|
||||||
|
|
||||||
class HelpScreen(Screen):
|
|
||||||
def __init__(self):
|
|
||||||
super().__init__()
|
|
||||||
context = self.app.getContext()
|
|
||||||
def compose(self) -> ComposeResult:
|
|
||||||
yield Placeholder("Help Screen")
|
|
||||||
yield Footer()
|
|
||||||
|
|
||||||
|
|
||||||
class ModesApp(App):
|
|
||||||
|
|
||||||
BINDINGS = [
|
|
||||||
("q", "quit()", "Quit"),
|
|
||||||
# ("d", "switch_mode('dashboard')", "Dashboard"),
|
|
||||||
# ("s", "switch_mode('settings')", "Settings"),
|
|
||||||
# ("h", "switch_mode('help')", "Help"),
|
|
||||||
]
|
|
||||||
|
|
||||||
MODES = {
|
|
||||||
"warning": WarningScreen,
|
|
||||||
"dashboard": DashboardScreen,
|
|
||||||
"settings": SettingsScreen,
|
|
||||||
"help": HelpScreen,
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
def __init__(self, context = {}):
|
|
||||||
super().__init__()
|
|
||||||
self.context = context
|
|
||||||
|
|
||||||
def on_mount(self) -> None:
|
|
||||||
self.switch_mode("warning")
|
|
||||||
|
|
||||||
def getContext(self):
|
|
||||||
return self.context
|
|
||||||
|
|
||||||
|
|
||||||
def executeProcess(commandSequence):
|
|
||||||
|
|
||||||
process = subprocess.Popen(commandSequence, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
|
|
||||||
|
|
||||||
output, error = process.communicate()
|
|
||||||
|
|
||||||
return output
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
#[{'index': 0, 'codec_name': 'vp9', 'codec_long_name': 'Google VP9', 'profile': 'Profile 0', 'codec_type': 'video', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'width': 1920, 'height': 1080, 'coded_width': 1920, 'coded_height': 1080, 'closed_captions': 0, 'film_grain': 0, 'has_b_frames': 0, 'sample_aspect_ratio': '1:1', 'display_aspect_ratio': '16:9', 'pix_fmt': 'yuv420p', 'level': -99, 'color_range': 'tv', 'chroma_location': 'left', 'field_order': 'progressive', 'refs': 1, 'r_frame_rate': '24000/1001', 'avg_frame_rate': '24000/1001', 'time_base': '1/1000', 'start_pts': 0, 'start_time': '0.000000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'BPS': '7974017', 'NUMBER_OF_FRAMES': '34382', 'NUMBER_OF_BYTES': '1429358655', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libvpx-vp9', 'DURATION': '00:23:54.016000000'}}]
|
|
||||||
#[{'index': 1, 'codec_name': 'opus', 'codec_long_name': 'Opus (Opus Interactive Audio Codec)', 'codec_type': 'audio', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'sample_fmt': 'fltp', 'sample_rate': '48000', 'channels': 2, 'channel_layout': 'stereo', 'bits_per_sample': 0, 'initial_padding': 312, 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'extradata_size': 19, 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'jpn', 'title': 'Japanisch', 'BPS': '128000', 'NUMBER_OF_FRAMES': '61763', 'NUMBER_OF_BYTES': '22946145', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libopus', 'DURATION': '00:23:54.141000000'}}]
|
|
||||||
|
|
||||||
#[{'index': 2, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'ger', 'title': 'Deutsch [Full]', 'BPS': '118', 'NUMBER_OF_FRAMES': '300', 'NUMBER_OF_BYTES': '21128', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:54.010000000'}}, {'index': 3, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'eng', 'title': 'Englisch [Full]', 'BPS': '101', 'NUMBER_OF_FRAMES': '276', 'NUMBER_OF_BYTES': '16980', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:53.230000000'}}]
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def getStreamDescriptor(filename):
|
|
||||||
|
|
||||||
ffprobeOutput = executeProcess(["ffprobe",
|
|
||||||
"-show_streams",
|
|
||||||
"-of", "json",
|
|
||||||
filename])
|
|
||||||
|
|
||||||
streamData = json.loads(ffprobeOutput)['streams']
|
|
||||||
|
|
||||||
descriptor = []
|
|
||||||
|
|
||||||
i = 0
|
|
||||||
for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_VIDEO]:
|
|
||||||
descriptor.append({
|
|
||||||
'index': d['index'],
|
|
||||||
'sub_index': i,
|
|
||||||
'type': STREAM_TYPE_VIDEO,
|
|
||||||
'codec': d['codec_name']
|
|
||||||
})
|
|
||||||
i += 1
|
|
||||||
|
|
||||||
i = 0
|
|
||||||
for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_AUDIO]:
|
|
||||||
|
|
||||||
streamDescriptor = {
|
|
||||||
'index': d['index'],
|
|
||||||
'sub_index': i,
|
|
||||||
'type': STREAM_TYPE_AUDIO,
|
|
||||||
'codec': d['codec_name'],
|
|
||||||
'channels': d['channels']
|
|
||||||
}
|
|
||||||
|
|
||||||
if 'channel_layout' in d.keys():
|
|
||||||
streamDescriptor['layout'] = d['channel_layout']
|
|
||||||
elif d['channels'] == 6:
|
|
||||||
streamDescriptor['layout'] = STREAM_LAYOUT_6CH
|
|
||||||
else:
|
|
||||||
streamDescriptor['layout'] = 'undefined'
|
|
||||||
|
|
||||||
descriptor.append(streamDescriptor)
|
|
||||||
i += 1
|
|
||||||
|
|
||||||
i = 0
|
|
||||||
for d in [s for s in streamData if s['codec_type'] == STREAM_TYPE_SUBTITLE]:
|
|
||||||
descriptor.append({
|
|
||||||
'index': d['index'],
|
|
||||||
'sub_index': i,
|
|
||||||
'type': STREAM_TYPE_SUBTITLE,
|
|
||||||
'codec': d['codec_name']
|
|
||||||
})
|
|
||||||
i += 1
|
|
||||||
|
|
||||||
return descriptor
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def generateAV1Tokens(q, p):
|
|
||||||
|
|
||||||
return ['-c:v:0', 'libsvtav1',
|
|
||||||
'-svtav1-params', f"crf={q}:preset={p}:tune=0:enable-overlays=1:scd=1:scm=0",
|
|
||||||
'-pix_fmt', 'yuv420p10le']
|
|
||||||
|
|
||||||
def generateVP9Pass1Tokens(q):
|
|
||||||
|
|
||||||
return ['-c:v:0', 'libvpx-vp9',
|
|
||||||
'-row-mt', '1',
|
|
||||||
'-crf', str(q),
|
|
||||||
'-pass', '1',
|
|
||||||
'-speed', '4',
|
|
||||||
'-frame-parallel', '0',
|
|
||||||
'-g', '9999',
|
|
||||||
'-aq-mode', '0']
|
|
||||||
|
|
||||||
def generateVP9Pass2Tokens(q):
|
|
||||||
|
|
||||||
return ['-c:v:0', 'libvpx-vp9',
|
|
||||||
'-row-mt', '1',
|
|
||||||
'-crf', str(q),
|
|
||||||
'-pass', '2',
|
|
||||||
'-frame-parallel', '0',
|
|
||||||
'-g', '9999',
|
|
||||||
'-aq-mode', '0',
|
|
||||||
'-auto-alt-ref', '1',
|
|
||||||
'-lag-in-frames', '25']


def generateCropTokens(start, length):

    return ['-ss', str(start), '-t', str(length)]


def generateDenoiseTokens(spatial=5, patch=7, research=7, hw=False):

    filterName = 'nlmeans_opencl' if hw else 'nlmeans'
    return ['-vf', f"{filterName}=s={spatial}:p={patch}:r={research}"]
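
# Example (illustrative): generateDenoiseTokens() -> ['-vf', 'nlmeans=s=5:p=7:r=7'];
# with hw=True the OpenCL variant 'nlmeans_opencl' is selected instead.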


def generateOutputTokens(f, suffix, q=None):

    if q is None:
        return ['-f', 'webm', f"{f}.{suffix}"]
    else:
        return ['-f', 'webm', f"{f}_q{q}.{suffix}"]
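
# Example (illustrative): generateOutputTokens('movie', 'webm', 31)
#   -> ['-f', 'webm', 'movie_q31.webm']
# while generateOutputTokens('movie', 'webm') -> ['-f', 'webm', 'movie.webm']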


# preset = DEFAULT_AV1_PRESET
# presetTokens = [p for p in sys.argv if p.startswith('p=')]
# if presetTokens:
#     preset = int(presetTokens[0].split('=')[1])

# cropStart = ''
# cropLength = ''
# cropTokens = [c for c in sys.argv if c.startswith('crop')]
# if cropTokens:
#     if '=' in cropTokens[0]:
#         cropString = cropTokens[0].split('=')[1]
#         cropStart, cropLength = cropString.split(',')
#     else:
#         cropStart = 60
#         cropLength = 180
#
# denoiseTokens = [d for d in sys.argv if d.startswith('denoise')]
#

# for aStream in audioStreams:
#     if 'channel_layout' in aStream:
#         print(f"audio stream: {aStream['channel_layout']}")  # channel_layout
#     else:
#         print(f"unknown audio stream with {aStream['channels']} channels")  # channel_layout


def generateAudioTokens(context, index, layout):

    if layout == STREAM_LAYOUT_6_1:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                'channelmap=channel_layout=6.1',
                f"-b:a:{index}",
                context['bitrates']['dts']]

    elif layout == STREAM_LAYOUT_5_1:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                "channelmap=FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1",
                f"-b:a:{index}",
                context['bitrates']['ac3']]

    elif layout == STREAM_LAYOUT_STEREO:
        return [f"-c:a:{index}",
                'libopus',
                f"-b:a:{index}",
                context['bitrates']['stereo']]

    elif layout == STREAM_LAYOUT_6CH:
        return [f"-c:a:{index}",
                'libopus',
                f"-filter:a:{index}",
                "channelmap=FL-FL|FR-FR|FC-FC|LFE-LFE|SL-BL|SR-BR:5.1",
                f"-b:a:{index}",
                context['bitrates']['ac3']]
    else:
        return []
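
# Example (illustrative; assumes STREAM_LAYOUT_STEREO == 'stereo' and a context
# prepared like in convert() below):
#   generateAudioTokens({'bitrates': {'stereo': '128k'}}, 0, 'stereo')
#   -> ['-c:a:0', 'libopus', '-b:a:0', '128k']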


def generateClearTokens(streams):

    clearTokens = []
    for s in streams:
        for k in MKVMERGE_METADATA_KEYS:
            clearTokens += [f"-metadata:s:{s['type'][0]}:{s['sub_index']}", f"{k}="]
    return clearTokens
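
# Example (illustrative; assumes MKVMERGE_METADATA_KEYS contains 'BPS'): for a
# stream descriptor {'type': 'video', 'sub_index': 0, ...} this yields the pair
# ['-metadata:s:v:0', 'BPS='], i.e. the mkvmerge statistics tag is blanked.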


@click.group()
@click.pass_context
def ffx(ctx):
    """FFX"""
    ctx.obj = {}


# Define a subcommand
@ffx.command()
def version():
    click.echo(VERSION)


# Another subcommand
@ffx.command()
def help():
    click.echo(f"ffx {VERSION}\n")
    click.echo("Usage: ffx [input file] [output file] [vp9|av1] [q=[nn[,nn,...]]] [p=nn] [a=nnn[k]] [ac3=nnn[k]] [dts=nnn[k]] [crop]")


@ffx.command()
@click.argument('filename', nargs=1)
def streams(filename):
    for d in getStreamDescriptor(filename):
        click.echo(f"{d['codec']}{' (' + str(d['channels']) + ')' if d['type'] == 'audio' else ''}")


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default=DEFAULT_LABEL, help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=DEFAULT_VIDEO_ENCODER, help='Target video encoder (vp9 or av1), default: vp9')
@click.option('-q', '--quality', type=str, default=DEFAULT_QUALITY, help='Quality settings to be used with VP9 encoder (default: 23)')
@click.option('-p', '--preset', type=str, default=DEFAULT_AV1_PRESET, help='Quality preset to be used with AV1 encoder (default: 5)')
@click.option('-a', '--stereo-bitrate', type=int, default=DEFAULT_STEREO_BANDWIDTH, help='Bitrate in kbit/s to be used to encode stereo audio streams')
@click.option('-ac3', '--ac3-bitrate', type=int, default=DEFAULT_AC3_BANDWIDTH, help='Bitrate in kbit/s to be used to encode 5.1 audio streams')
@click.option('-dts', '--dts-bitrate', type=int, default=DEFAULT_DTS_BANDWIDTH, help='Bitrate in kbit/s to be used to encode 6.1 audio streams')
@click.option('-ds', '--default-subtitle', type=int, help='Index of default subtitle stream')
@click.option('-fa', '--forced-audio', type=int, help='Index of forced audio stream (including default audio stream tag)')
@click.option('-da', '--default-audio', type=int, help='Index of default audio stream')
@click.option("--crop", is_flag=False, flag_value="default", default="none")
@click.option("-c", "--clear-metadata", is_flag=True, default=False)
@click.option("-d", "--denoise", is_flag=True, default=False)
@click.option("-o", "--output-directory", type=str, default='')
def convert(ctx, paths, label, video_encoder, quality, preset, stereo_bitrate, ac3_bitrate, dts_bitrate, crop, clear_metadata, default_subtitle, forced_audio, default_audio, denoise, output_directory):
    """Batch conversion of audio/video files into a format suitable for web playback, e.g. with Jellyfin.

    Files found under PATHS will be converted according to parameters.
    Filename extensions will be changed appropriately.
    Suffixes will be appended to the filename in case of multiple created files
    or if the filename has not changed."""

    startTime = time.perf_counter()

    context = ctx.obj

    click.echo(f"\nVideo encoder: {video_encoder}")

    qualityTokens = quality.split(',')
    q_list = [q for q in qualityTokens if q.isnumeric()]

    click.echo(f"Qualities: {q_list}")

    ctx.obj['bitrates'] = {}
    ctx.obj['bitrates']['stereo'] = str(stereo_bitrate) if str(stereo_bitrate).endswith('k') else f"{stereo_bitrate}k"
    ctx.obj['bitrates']['ac3'] = str(ac3_bitrate) if str(ac3_bitrate).endswith('k') else f"{ac3_bitrate}k"
    ctx.obj['bitrates']['dts'] = str(dts_bitrate) if str(dts_bitrate).endswith('k') else f"{dts_bitrate}k"
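    # Plain numbers and 'k'-suffixed values normalize to the same ffmpeg bitrate
    # string, e.g. -a 128 and -a 128k both become '128k'.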

    click.echo(f"Stereo bitrate: {ctx.obj['bitrates']['stereo']}")
    click.echo(f"AC3 bitrate: {ctx.obj['bitrates']['ac3']}")
    click.echo(f"DTS bitrate: {ctx.obj['bitrates']['dts']}")

    ctx.obj['perform_crop'] = (crop != 'none')

    if ctx.obj['perform_crop']:

        cropTokens = crop.split(',')

        if cropTokens and len(cropTokens) == 2:
            ctx.obj['crop_start'], ctx.obj['crop_length'] = cropTokens
        else:
            ctx.obj['crop_start'] = DEFAULT_CROP_START
            ctx.obj['crop_length'] = DEFAULT_CROP_LENGTH

        click.echo(f"crop start={ctx.obj['crop_start']} length={ctx.obj['crop_length']}")

    click.echo(f"\nRunning {len(paths) * len(q_list)} jobs")

    se_match = re.compile(SEASON_EPISODE_INDICATOR_MATCH)
    s_match = re.compile(SEASON_INDICATOR_MATCH)
    e_match = re.compile(EPISODE_INDICATOR_MATCH)

    for sourcePath in paths:

        if not os.path.isfile(sourcePath):
            click.echo(f"There is no file with path {sourcePath}, skipping ...")
            continue

        sourceDirectory = os.path.dirname(sourcePath)
        sourceFilename = os.path.basename(sourcePath)
        sourcePathTokens = sourceFilename.split('.')

        if sourcePathTokens[-1] in FILE_EXTENSIONS:
            sourceFileBasename = '.'.join(sourcePathTokens[:-1])
            sourceFilenameExtension = sourcePathTokens[-1]
        else:
            sourceFileBasename = sourceFilename
            sourceFilenameExtension = ''

        # click.echo(f"dir={sourceDirectory} base={sourceFileBasename} ext={sourceFilenameExtension}")

        click.echo(f"\nProcessing file {sourcePath}")

        se_result = se_match.search(sourceFilename)
        s_result = s_match.search(sourceFilename)
        e_result = e_match.search(sourceFilename)

        # streamDescriptor = getStreamDescriptor(sourcePath)
        # commandTokens = COMMAND_TOKENS + [sourcePath]

        # for q in q_list:
        #     click.echo(f"\nRunning job q={q}")
        #
        #     mappingVideoTokens = ['-map', 'v:0']
        #     mappingTokens = mappingVideoTokens.copy()
        #     audioTokens = []
        #
        #     audioIndex = 0
        #     for audioStreamDescriptor in streamDescriptor:
        #         if audioStreamDescriptor['type'] == STREAM_TYPE_AUDIO:
        #             mappingTokens += ['-map', f"a:{audioIndex}"]
        #             audioTokens += generateAudioTokens(ctx.obj, audioIndex, audioStreamDescriptor['layout'])
        #             audioIndex += 1
        #
        #     for s in range(len([d for d in streamDescriptor if d['type'] == STREAM_TYPE_SUBTITLE])):
        #         mappingTokens += ['-map', f"s:{s}"]
        #
        #     if video_encoder == 'av1':
        #         commandSequence = commandTokens + mappingTokens + audioTokens + generateAV1Tokens(q, preset) + audioTokens
        #
        #         if clear_metadata:
        #             commandSequence += generateClearTokens(streamDescriptor)
        #
        #         if performCrop:
        #             commandSequence += generateCropTokens(cropStart, cropLength)
        #
        #         commandSequence += generateOutputTokens(targetFilename, DEFAULT_FILE_SUFFIX, q)
        #
        #         click.echo(f"Command: {' '.join(commandSequence)}")
        #
        #         executeProcess(commandSequence)
        #
        #     if video_encoder == 'vp9':
        #         commandSequence1 = commandTokens + mappingVideoTokens + generateVP9Pass1Tokens(q)
        #
        #         if performCrop:
        #             commandSequence1 += generateCropTokens(cropStart, cropLength)
        #
        #         commandSequence1 += NULL_TOKENS
        #
        #         click.echo(f"Command 1: {' '.join(commandSequence1)}")
        #
        #         if os.path.exists(TEMP_FILE_NAME):
        #             os.remove(TEMP_FILE_NAME)
        #
        #         executeProcess(commandSequence1)
        #
        #         commandSequence2 = commandTokens + mappingTokens
        #
        #         if denoise:
        #             commandSequence2 += generateDenoiseTokens()
        #
        #         commandSequence2 += generateVP9Pass2Tokens(q) + audioTokens
        #
        #         if clear_metadata:
        #             commandSequence2 += generateClearTokens(streamDescriptor)
        #
        #         if performCrop:
        #             commandSequence2 += generateCropTokens(cropStart, cropLength)
        #
        #         commandSequence2 += generateOutputTokens(targetFilename, DEFAULT_FILE_SUFFIX, q)
        #
        #         click.echo(f"Command 2: {' '.join(commandSequence2)}")
        #
        #         executeProcess(commandSequence2)

    # app = ModesApp(ctx.obj)
    # app.run()

    # click.confirm('Warning! This file is not compliant to the defined source schema! Do you want to continue?', abort=True)

    click.echo('\nDONE\n')

    endTime = time.perf_counter()
    click.echo(f"Time elapsed {endTime - startTime}")

    # click.echo(f"app result: {app.getContext()}")


if __name__ == '__main__':
    ffx()
@ -1,537 +0,0 @@
#! /usr/bin/python3

import os, click, time, logging

from ffx.configuration_controller import ConfigurationController

from ffx.file_properties import FileProperties

from ffx.ffx_app import FfxApp
from ffx.ffx_controller import FfxController
from ffx.tmdb_controller import TmdbController

from ffx.database import databaseContext

from ffx.media_descriptor import MediaDescriptor
from ffx.track_descriptor import TrackDescriptor
from ffx.track_type import TrackType
from ffx.video_encoder import VideoEncoder
from ffx.track_disposition import TrackDisposition

from ffx.process import executeProcess


VERSION = '0.2.0'

# 0.1.1
# Bugfixes, TMDB identify shows
# 0.1.2
# Bugfixes
# 0.1.3
# Subtitle file imports
# 0.2.0
# Tests, Config-File

@click.group()
@click.pass_context
@click.option('--database-file', type=str, default='', help='Path to database file')
@click.option('-v', '--verbose', type=int, default=0, help='Set verbosity of output')
@click.option("--dry-run", is_flag=True, default=False)
def ffx(ctx, database_file, verbose, dry_run):
    """FFX"""

    ctx.obj = {}

    ctx.obj['config'] = ConfigurationController()

    ctx.obj['database'] = databaseContext(databasePath=database_file
                                          if database_file else ctx.obj['config'].getDatabaseFilePath())

    ctx.obj['dry_run'] = dry_run
    ctx.obj['verbosity'] = verbose

    # Critical 50
    # Error    40
    # Warning  30
    # Info     20
    # Debug    10
    fileLogVerbosity = max(40 - verbose * 10, 10)
    consoleLogVerbosity = max(20 - verbose * 10, 10)

    ctx.obj['logger'] = logging.getLogger('FFX')
    ctx.obj['logger'].setLevel(logging.DEBUG)

    ffxFileHandler = logging.FileHandler(ctx.obj['config'].getLogFilePath())
    ffxFileHandler.setLevel(fileLogVerbosity)
    ffxConsoleHandler = logging.StreamHandler()
    ffxConsoleHandler.setLevel(consoleLogVerbosity)

    fileFormatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ffxFileHandler.setFormatter(fileFormatter)
    consoleFormatter = logging.Formatter(
        '%(message)s')
    ffxConsoleHandler.setFormatter(consoleFormatter)

    ctx.obj['logger'].addHandler(ffxConsoleHandler)
    ctx.obj['logger'].addHandler(ffxFileHandler)


# Define a subcommand
@ffx.command()
def version():
    click.echo(VERSION)


# Another subcommand
@ffx.command()
def help():
    click.echo(f"ffx {VERSION}\n")
    click.echo("Usage: ffx [input file] [output file] [vp9|av1] [q=[nn[,nn,...]]] [p=nn] [a=nnn[k]] [ac3=nnn[k]] [dts=nnn[k]] [crop]")


@ffx.command()
@click.pass_context
@click.argument('filename', nargs=1)
def inspect(ctx, filename):

    ctx.obj['command'] = 'inspect'
    ctx.obj['arguments'] = {}
    ctx.obj['arguments']['filename'] = filename

    app = FfxApp(ctx.obj)
    app.run()


# TODO: TrackCodec class
CODEC_LOOKUP_TABLE = {
    'h264': {'format': 'h264', 'extension': 'h264'},
    'aac': {'extension': 'aac'},
    'ac3': {'format': 'ac3', 'extension': 'ac3'},
    'ass': {'format': 'ass', 'extension': 'ass'},
    'hdmv_pgs_subtitle': {'format': 'sup', 'extension': 'sup'}
}


def getUnmuxSequence(trackDescriptor: TrackDescriptor, sourcePath, targetPrefix, targetDirectory=''):

    trackCodec = trackDescriptor.getCodec()

    if trackCodec not in CODEC_LOOKUP_TABLE:
        return []

    commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]
    trackType = trackDescriptor.getType()

    targetPathBase = os.path.join(targetDirectory, targetPrefix) if targetDirectory else targetPrefix

    commandTokens += ['-map',
                      f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}",
                      '-c',
                      'copy']

    if 'format' in CODEC_LOOKUP_TABLE[trackCodec]:
        commandTokens += ['-f', CODEC_LOOKUP_TABLE[trackCodec]['format']]

    commandTokens += [f"{targetPathBase}.{CODEC_LOOKUP_TABLE[trackCodec]['extension']}"]

    return commandTokens
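
# Example (illustrative): for an 'ass' subtitle track with sub-index 0 this
# could yield something like
#   <FfxController.COMMAND_TOKENS> -i ep01.mkv -map 0:s:0 -c copy -f ass show_S01E01_3_ger.ass
# assuming TrackType.SUBTITLE.indicator() == 's'; the exact leading tokens
# depend on FfxController.COMMAND_TOKENS.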


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option("-o", "--output-directory", type=str, default='')
@click.option("-s", "--subtitles-only", is_flag=True, default=False)
def unmux(ctx,
          paths,
          label,
          output_directory,
          subtitles_only):

    existingSourcePaths = [p for p in paths if os.path.isfile(p)]
    if ctx.obj['verbosity'] > 0:
        click.echo(f"\nUnmuxing {len(existingSourcePaths)} files")

    for sourcePath in existingSourcePaths:

        fp = FileProperties(ctx.obj, sourcePath)

        try:
            sourceMediaDescriptor = fp.getMediaDescriptor()

            season = fp.getSeason()
            episode = fp.getEpisode()

            # TODO: adapt the recognition for all formats
            targetLabel = label if label else fp.getFileBasename()
            targetIndicator = f"_S{season}E{episode}" if label and season != -1 and episode != -1 else ''

            if label and not targetIndicator:
                if ctx.obj['verbosity'] > 0:
                    click.echo(f"Skipping file {fp.getFilename()}: Label set but no indicator recognized")
                continue
            else:
                if ctx.obj['verbosity'] > 0:
                    click.echo(f"\nUnmuxing file {fp.getFilename()}\n")

            for trackDescriptor in sourceMediaDescriptor.getAllTrackDescriptors():

                if trackDescriptor.getType() == TrackType.SUBTITLE or not subtitles_only:

                    # SEASON_EPISODE_STREAM_LANGUAGE_MATCH = '[sS]([0-9]+)[eE]([0-9]+)_([0-9]+)_([a-z]{3})'
                    targetPrefix = f"{targetLabel}{targetIndicator}_{trackDescriptor.getIndex()}_{trackDescriptor.getLanguage().threeLetter()}"

                    unmuxSequence = getUnmuxSequence(trackDescriptor, sourcePath, targetPrefix, targetDirectory=output_directory)

                    if unmuxSequence:
                        if not ctx.obj['dry_run']:
                            if ctx.obj['verbosity'] > 0:
                                click.echo(f"Executing unmuxing sequence: {' '.join(unmuxSequence)}")
                            out, err, rc = executeProcess(unmuxSequence)
                            if rc:
                                if ctx.obj['verbosity'] > 0:
                                    click.echo(f"Unmuxing of stream {trackDescriptor.getIndex()} failed with error ({rc}) {err}")
                    else:
                        if ctx.obj['verbosity'] > 0:
                            click.echo(f"Skipping stream with unknown codec {trackDescriptor.getCodec()}")
        except Exception as ex:
            if ctx.obj['verbosity'] > 0:
                click.echo(f"Skipping File {sourcePath} ({ex})")


@ffx.command()
@click.pass_context
def shows(ctx):

    ctx.obj['command'] = 'shows'

    app = FfxApp(ctx.obj)
    app.run()


def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):

    # Check for multiple default or forced dispositions if not set by user input or database requirements
    #
    # Query the user for the correct sub indices, then configure the flags in the track descriptors
    # associated with the media descriptor accordingly. The correct tokens should then be created by
    if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default video stream detected and no prompt set')
        defaultVideoTrackSubIndex = click.prompt("More than one default video stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.VIDEO, defaultVideoTrackSubIndex)

    if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced video stream detected and no prompt set')
        forcedVideoTrackSubIndex = click.prompt("More than one forced video stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.VIDEO, forcedVideoTrackSubIndex)

    if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default audio stream detected and no prompt set')
        defaultAudioTrackSubIndex = click.prompt("More than one default audio stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.AUDIO, defaultAudioTrackSubIndex)

    if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced audio stream detected and no prompt set')
        forcedAudioTrackSubIndex = click.prompt("More than one forced audio stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.AUDIO, forcedAudioTrackSubIndex)

    if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default subtitle stream detected and no prompt set')
        defaultSubtitleTrackSubIndex = click.prompt("More than one default subtitle stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.SUBTITLE, defaultSubtitleTrackSubIndex)

    if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced subtitle stream detected and no prompt set')
        forcedSubtitleTrackSubIndex = click.prompt("More than one forced subtitle stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.SUBTITLE, forcedSubtitleTrackSubIndex)


@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=FfxController.DEFAULT_VIDEO_ENCODER, help=f"Target video encoder (vp9 or av1) default: {FfxController.DEFAULT_VIDEO_ENCODER}")
@click.option('-q', '--quality', type=str, default=FfxController.DEFAULT_QUALITY, help=f"Quality settings to be used with VP9 encoder (default: {FfxController.DEFAULT_QUALITY})")
@click.option('-p', '--preset', type=str, default=FfxController.DEFAULT_AV1_PRESET, help=f"Quality preset to be used with AV1 encoder (default: {FfxController.DEFAULT_AV1_PRESET})")
@click.option('-s', '--stereo-bitrate', type=int, default=FfxController.DEFAULT_STEREO_BANDWIDTH, help=f"Bitrate in kbit/s to be used to encode stereo audio streams (default: {FfxController.DEFAULT_STEREO_BANDWIDTH})")
@click.option('--ac3', type=int, default=FfxController.DEFAULT_AC3_BANDWIDTH, help=f"Bitrate in kbit/s to be used to encode 5.1 audio streams (default: {FfxController.DEFAULT_AC3_BANDWIDTH})")
@click.option('--dts', type=int, default=FfxController.DEFAULT_DTS_BANDWIDTH, help=f"Bitrate in kbit/s to be used to encode 6.1 audio streams (default: {FfxController.DEFAULT_DTS_BANDWIDTH})")
@click.option('--subtitle-directory', type=str, default='', help='Load subtitles from here')
@click.option('--subtitle-prefix', type=str, default='', help='Subtitle filename prefix')
@click.option('--audio-language', type=str, multiple=True, help='Audio stream language(s)')
@click.option('--audio-title', type=str, multiple=True, help='Audio stream title(s)')
@click.option('--default-audio', type=int, default=-1, help='Index of default audio stream')
@click.option('--forced-audio', type=int, default=-1, help='Index of forced audio stream')
@click.option('--subtitle-language', type=str, multiple=True, help='Subtitle stream language(s)')
@click.option('--subtitle-title', type=str, multiple=True, help='Subtitle stream title(s)')
@click.option('--default-subtitle', type=int, default=-1, help='Index of default subtitle stream')
@click.option('--forced-subtitle', type=int, default=-1, help='Index of forced subtitle stream')  # (including default audio stream tag)
@click.option("--crop", is_flag=False, flag_value="default", default="none")
@click.option("--output-directory", type=str, default='')
@click.option("--denoise", is_flag=True, default=False)
@click.option("--no-tmdb", is_flag=True, default=False)
@click.option("--no-jellyfin", is_flag=True, default=False)
@click.option("--no-pattern", is_flag=True, default=False)
@click.option("--dont-pass-dispositions", is_flag=True, default=False)
@click.option("--no-prompt", is_flag=True, default=False)
def convert(ctx,
            paths,
            label,
            video_encoder,
            quality,
            preset,
            stereo_bitrate,
            ac3,
            dts,
            subtitle_directory,
            subtitle_prefix,
            audio_language,
            audio_title,
            default_audio,
            forced_audio,
            subtitle_language,
            subtitle_title,
            default_subtitle,
            forced_subtitle,
            crop,
            output_directory,
            denoise,
            no_tmdb,
            no_jellyfin,
            no_pattern,
            dont_pass_dispositions,
            no_prompt):
"""Batch conversion of audiovideo files in format suitable for web playback, e.g. jellyfin
|
|
||||||
|
|
||||||
Files found under PATHS will be converted according to parameters.
|
|
||||||
Filename extensions will be changed appropriately.
|
|
||||||
Suffices will we appended to filename in case of multiple created files
|
|
||||||
or if the filename has not changed."""
|
|
||||||
|
|
||||||
startTime = time.perf_counter()
|
|
||||||
|
|
||||||
context = ctx.obj
|
|
||||||
|
|
||||||
context['video_encoder'] = VideoEncoder.fromLabel(video_encoder)
|
|
||||||
|
|
||||||
context['use_jellyfin'] = not no_jellyfin
|
|
||||||
context['use_tmdb'] = not no_tmdb
|
|
||||||
context['use_pattern'] = not no_pattern
|
|
||||||
context['no_prompt'] = no_prompt
|
|
||||||
|
|
||||||
context['import_subtitles'] = (subtitle_directory and subtitle_prefix)
|
|
||||||
if context['import_subtitles']:
|
|
||||||
context['subtitle_directory'] = subtitle_directory
|
|
||||||
context['subtitle_prefix'] = subtitle_prefix
|
|
||||||
|
|
||||||
# click.echo(f"\nVideo encoder: {video_encoder}")
|
|
||||||
|
|
||||||
qualityTokens = quality.split(',')
|
|
||||||
q_list = [q for q in qualityTokens if q.isnumeric()]
|
|
||||||
|
|
||||||
if ctx.obj['verbosity'] > 0:
|
|
||||||
click.echo(f"Qualities: {q_list}")
|
|
||||||
|
|
||||||
context['bitrates'] = {}
|
|
||||||
context['bitrates']['stereo'] = str(stereo_bitrate) if str(stereo_bitrate).endswith('k') else f"{stereo_bitrate}k"
|
|
||||||
context['bitrates']['ac3'] = str(ac3) if str(ac3).endswith('k') else f"{ac3}k"
|
|
||||||
context['bitrates']['dts'] = str(dts) if str(dts).endswith('k') else f"{dts}k"

    if ctx.obj['verbosity'] > 0:
        click.echo(f"Stereo bitrate: {context['bitrates']['stereo']}")
        click.echo(f"AC3 bitrate: {context['bitrates']['ac3']}")
        click.echo(f"DTS bitrate: {context['bitrates']['dts']}")

    # Process crop parameters
    context['perform_crop'] = (crop != 'none')
    if context['perform_crop']:
        cTokens = crop.split(',')
        if cTokens and len(cTokens) == 2:
            context['crop_start'] = int(cTokens[0])
            context['crop_length'] = int(cTokens[1])
            if ctx.obj['verbosity'] > 0:
                click.echo(f"Crop start={context['crop_start']} length={context['crop_length']}")

    tc = TmdbController() if context['use_tmdb'] else None

    existingSourcePaths = [p for p in paths if os.path.isfile(p) and p.split('.')[-1] in FfxController.INPUT_FILE_EXTENSIONS]
    if ctx.obj['verbosity'] > 0:
        click.echo(f"\nRunning {len(existingSourcePaths) * len(q_list)} jobs")
    jobIndex = 0

    for sourcePath in existingSourcePaths:

        # Separate basedir, basename and extension for the current source file
        sourceDirectory = os.path.dirname(sourcePath)
        sourceFilename = os.path.basename(sourcePath)
        sourcePathTokens = sourceFilename.split('.')

        sourceFileBasename = '.'.join(sourcePathTokens[:-1])
        sourceFilenameExtension = sourcePathTokens[-1]

        if ctx.obj['verbosity'] > 0:
            click.echo(f"\nProcessing file {sourcePath}")

        mediaFileProperties = FileProperties(context, sourcePath)
        sourceMediaDescriptor = mediaFileProperties.getMediaDescriptor()

        # HINT: This is None if the filename did not match anything in the database
        currentPattern = mediaFileProperties.getPattern() if context['use_pattern'] else None

        if ctx.obj['verbosity'] > 0:
            click.echo(f"Pattern matching: {'No' if currentPattern is None else 'Yes'}")

        # fileBasename = ''

        if currentPattern is None:

            # Case: no pattern matching

            # fileBasename = currentShowDescriptor.getFilenamePrefix()

            checkUniqueDispositions(context, sourceMediaDescriptor)

            if context['import_subtitles']:
                sourceMediaDescriptor.importSubtitles(context['subtitle_directory'],
                                                      context['subtitle_prefix'],
                                                      mediaFileProperties.getSeason(),
                                                      mediaFileProperties.getEpisode())

            if context['use_jellyfin']:
                # Reorder subtracks within each type so that the default track comes last, then flatten the subindices again
                sourceMediaDescriptor.applyJellyfinOrder()

            fc = FfxController(context, sourceMediaDescriptor)

        else:

            # Case: pattern matching

            targetMediaDescriptor = currentPattern.getMediaDescriptor(ctx.obj)

            checkUniqueDispositions(context, targetMediaDescriptor)

            currentShowDescriptor = currentPattern.getShowDescriptor(ctx.obj)

            if context['use_tmdb']:

                click.echo(f"Querying TMDB for show_id={currentShowDescriptor.getId()} season={mediaFileProperties.getSeason()} episode={mediaFileProperties.getEpisode()}")
                tmdbEpisodeResult = tc.queryEpisode(currentShowDescriptor.getId(), mediaFileProperties.getSeason(), mediaFileProperties.getEpisode())
                click.echo(f"tmdbEpisodeResult={tmdbEpisodeResult}")

                if tmdbEpisodeResult:
                    sourceFileBasename = TmdbController.getEpisodeFileBasename(currentShowDescriptor.getFilenamePrefix(),
                                                                               tmdbEpisodeResult['name'],
                                                                               mediaFileProperties.getSeason(),
                                                                               mediaFileProperties.getEpisode(),
                                                                               currentShowDescriptor.getIndexSeasonDigits(),
                                                                               currentShowDescriptor.getIndexEpisodeDigits(),
                                                                               currentShowDescriptor.getIndicatorSeasonDigits(),
                                                                               currentShowDescriptor.getIndicatorEpisodeDigits())
                else:
                    sourceFileBasename = currentShowDescriptor.getFilenamePrefix()

            if context['import_subtitles']:
                targetMediaDescriptor.importSubtitles(context['subtitle_directory'],
                                                      context['subtitle_prefix'],
                                                      mediaFileProperties.getSeason(),
                                                      mediaFileProperties.getEpisode())

            # raise click.ClickException(f"tmd subindices: {[t.getSubIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]}")
            # click.echo(f"tmd subindices: {[t.getIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getSubIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getDispositionFlag(TrackDisposition.DEFAULT) for t in targetMediaDescriptor.getAllTrackDescriptors()]}")

            if context['use_jellyfin']:
                # Reorder subtracks within each type so that the default track comes last, then flatten the subindices again
                targetMediaDescriptor.applyJellyfinOrder()
                # sourceMediaDescriptor.applyJellyfinOrder()

            # click.echo(f"tmd subindices: {[t.getIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getSubIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getDispositionFlag(TrackDisposition.DEFAULT) for t in targetMediaDescriptor.getAllTrackDescriptors()]}")
            # raise click.Abort

            if ctx.obj['verbosity'] > 0:
                click.echo(f"Input mapping tokens (2nd pass): {targetMediaDescriptor.getInputMappingTokens()}")

            fc = FfxController(context, targetMediaDescriptor, sourceMediaDescriptor)

        if ctx.obj['verbosity'] > 0:
            click.echo(f"Season={mediaFileProperties.getSeason()} Episode={mediaFileProperties.getEpisode()}")

        if ctx.obj['verbosity'] > 0:
            click.echo(f"fileBasename={sourceFileBasename}")

        for q in q_list:

            if ctx.obj['verbosity'] > 0:
                click.echo(f"\nRunning job {jobIndex} file={sourcePath} q={q}")
            jobIndex += 1

            extra = ['ffx'] if sourceFilenameExtension == FfxController.DEFAULT_FILE_EXTENSION else []

            click.echo(f"label={label if label else 'Falsy'}")
            click.echo(f"sourceFileBasename={sourceFileBasename}")

            targetFilename = (sourceFileBasename if context['use_tmdb']
                              else mediaFileProperties.assembleTargetFileBasename(label,
                                                                                  q if len(q_list) > 1 else -1,
                                                                                  extraTokens=extra))

            targetPath = os.path.join(output_directory if output_directory else sourceDirectory, targetFilename)

            # media_S01E02_S01E02
            click.echo(f"targetPath={targetPath}")

            fc.runJob(sourcePath,
                      targetPath,
                      context['video_encoder'],
                      q,
                      preset,
                      denoise)

    # TODO: click.confirm('Warning! This file is not compliant to the defined source schema! Do you want to continue?', abort=True)

    endTime = time.perf_counter()
    if ctx.obj['verbosity'] > 0:
        click.echo(f"\nDONE\nTime elapsed {endTime - startTime}")


if __name__ == '__main__':
    ffx()
@ -1,66 +0,0 @@
import os, re, click


class FileProperties():

    FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']

    SEASON_EPISODE_INDICATOR_MATCH = '[sS]([0-9]+)[eE]([0-9]+)'
    EPISODE_INDICATOR_MATCH = '[eE]([0-9]+)'

    def __init__(self, sourcePath):

        # Separate basedir, basename and extension for the current source file
        self.__sourceDirectory = os.path.dirname(sourcePath)
        self.__sourceFilename = os.path.basename(sourcePath)
        sourcePathTokens = self.__sourceFilename.split('.')

        if sourcePathTokens[-1] in FileProperties.FILE_EXTENSIONS:
            self.__sourceFileBasename = '.'.join(sourcePathTokens[:-1])
            self.__sourceFilenameExtension = sourcePathTokens[-1]
        else:
            self.__sourceFileBasename = self.__sourceFilename
            self.__sourceFilenameExtension = ''

        se_match = re.compile(FileProperties.SEASON_EPISODE_INDICATOR_MATCH)
        e_match = re.compile(FileProperties.EPISODE_INDICATOR_MATCH)

        se_result = se_match.search(self.__sourceFilename)
        e_result = e_match.search(self.__sourceFilename)

        self.__season = -1
        self.__episode = -1
        file_index = 0

        if se_result is not None:
            self.__season = int(se_result.group(1))
            self.__episode = int(se_result.group(2))
        elif e_result is not None:
            self.__episode = int(e_result.group(1))
        else:
            file_index += 1

        matchingFileSubtitleDescriptors = sorted([d for d in availableFileSubtitleDescriptors if d['season'] == season and d['episode'] == episode], key=lambda d: d['stream']) if availableFileSubtitleDescriptors else []

        print(f"season={season} episode={episode} file={file_index}")

        # Assemble target filename tokens
        targetFilenameTokens = []
        targetFilenameExtension = DEFAULT_FILE_EXTENSION

        if label:
            targetFilenameTokens = [label]

            if season > -1 and episode > -1:
                targetFilenameTokens += [f"S{season:0{season_digits}d}E{episode:0{episode_digits}d}"]
            elif episode > -1:
                targetFilenameTokens += [f"E{episode:0{episode_digits}d}"]
            else:
                targetFilenameTokens += [f"{file_index:0{index_digits}d}"]

        else:
            targetFilenameTokens = [sourceFileBasename]
@ -1,48 +0,0 @@
import os, json


class ConfigurationController():

    CONFIG_FILENAME = 'ffx.json'
    DATABASE_FILENAME = 'ffx.db'
    LOG_FILENAME = 'ffx.log'

    DATABASE_PATH_CONFIG_KEY = 'databasePath'
    LOG_DIRECTORY_CONFIG_KEY = 'logDirectory'

    def __init__(self):

        self.__homeDir = os.path.expanduser("~")
        self.__localVarDir = os.path.join(self.__homeDir, '.local', 'var')
        self.__localEtcDir = os.path.join(self.__homeDir, '.local', 'etc')

        self.__configurationData = {}

        # .local/etc/ffx.json
        self.__configFilePath = os.path.join(self.__localEtcDir, ConfigurationController.CONFIG_FILENAME)
        if os.path.isfile(self.__configFilePath):
            with open(self.__configFilePath, 'r') as configurationFile:
                self.__configurationData = json.load(configurationFile)

        if ConfigurationController.DATABASE_PATH_CONFIG_KEY in self.__configurationData:
            self.__databaseFilePath = self.__configurationData[ConfigurationController.DATABASE_PATH_CONFIG_KEY]
            os.makedirs(os.path.dirname(self.__databaseFilePath), exist_ok=True)
        else:
            ffxVarDir = os.path.join(self.__localVarDir, 'ffx')
            os.makedirs(ffxVarDir, exist_ok=True)
            self.__databaseFilePath = os.path.join(ffxVarDir, ConfigurationController.DATABASE_FILENAME)

        if ConfigurationController.LOG_DIRECTORY_CONFIG_KEY in self.__configurationData:
            self.__logDir = self.__configurationData[ConfigurationController.LOG_DIRECTORY_CONFIG_KEY]
        else:
            self.__logDir = os.path.join(self.__localVarDir, 'log')
        os.makedirs(self.__logDir, exist_ok=True)

    def getHomeDirectory(self):
        return self.__homeDir

    def getLogFilePath(self):
        return os.path.join(self.__logDir, ConfigurationController.LOG_FILENAME)

    def getDatabaseFilePath(self):
        return self.__databaseFilePath
@ -1,45 +0,0 @@
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from ffx.model.show import Base, Show
from ffx.model.pattern import Pattern
from ffx.model.track import Track

from ffx.model.media_tag import MediaTag
from ffx.model.track_tag import TrackTag


def databaseContext(databasePath: str = ''):

    databaseContext = {}

    if databasePath is None:
        # sqlite:///:memory:
        databasePath = ':memory:'
    elif not databasePath:
        homeDir = os.path.expanduser("~")
        ffxVarDir = os.path.join(homeDir, '.local', 'var', 'ffx')
        if not os.path.exists(ffxVarDir):
            os.makedirs(ffxVarDir)
        databasePath = os.path.join(ffxVarDir, 'ffx.db')

    databaseContext['url'] = f"sqlite:///{databasePath}"
    databaseContext['engine'] = create_engine(databaseContext['url'])
    databaseContext['session'] = sessionmaker(bind=databaseContext['engine'])

    Base.metadata.create_all(databaseContext['engine'])

    # isSynchronous = False
    # while not isSynchronous:
    # while True:
    #     try:
    #         with databaseContext['database_engine'].connect() as connection:
    #             connection.execute(sqlalchemy.text('PRAGMA foreign_keys=ON;'))
    #             # isSynchronous = True
    #             break
    #     except sqlite3.OperationalError:
    #         time.sleep(0.1)

    return databaseContext
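
# Usage sketch (assuming the default on-disk database; Show is imported above):
#
#   db = databaseContext()
#   with db['session']() as session:
#       shows = session.query(Show).all()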
@ -1,257 +0,0 @@
import os, re, click, json

from .media_descriptor import MediaDescriptor
from .pattern_controller import PatternController

from .process import executeProcess

from ffx.model.pattern import Pattern
from ffx.ffx_controller import FfxController
from ffx.show_descriptor import ShowDescriptor


class FileProperties():

    FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']

    SEASON_EPISODE_INDICATOR_MATCH = '[sS]([0-9]+)[eE]([0-9]+)'
    EPISODE_INDICATOR_MATCH = '[eE]([0-9]+)'

    DEFAULT_INDEX_DIGITS = 3

    def __init__(self, context, sourcePath):

        self.context = context

        self.__logger = context['logger']

        # Separate basedir, basename and extension for the current source file
        self.__sourcePath = sourcePath

        self.__sourceDirectory = os.path.dirname(self.__sourcePath)
        self.__sourceFilename = os.path.basename(self.__sourcePath)

        sourcePathTokens = self.__sourceFilename.split('.')

        if sourcePathTokens[-1] in FileProperties.FILE_EXTENSIONS:
            self.__sourceFileBasename = '.'.join(sourcePathTokens[:-1])
            self.__sourceFilenameExtension = sourcePathTokens[-1]
        else:
            self.__sourceFileBasename = self.__sourceFilename
            self.__sourceFilenameExtension = ''

        self.__pc = PatternController(context)

        # db pattern boruto_[sS]([0-9]+)[eE]([0-9]+).mkv

        # Check whether the database contains a matching pattern
        matchResult = self.__pc.matchFilename(self.__sourceFilename)

        self.__logger.debug(f"FileProperties.__init__(): Match result {matchResult}")

        self.__pattern: Pattern = matchResult['pattern'] if matchResult else None

        if matchResult:
            databaseMatchedGroups = matchResult['match'].groups()
            self.__season = int(databaseMatchedGroups[0])
            self.__episode = int(databaseMatchedGroups[1])
        else:
            self.__logger.debug(f"FileProperties.__init__(): Checking file name for indicator {self.__sourceFilename}")

            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, self.__sourceFilename)
            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, self.__sourceFilename)

            if se_match is not None:
                self.__season = int(se_match.group(1))
                self.__episode = int(se_match.group(2))
            elif e_match is not None:
                self.__season = -1
                self.__episode = int(e_match.group(1))
            else:
                self.__season = -1
                self.__episode = -1

    def getFormatData(self):
        """
        "format": {
            "filename": "Downloads/nagatoro_s02/nagatoro_s01e02.mkv",
            "nb_streams": 18,
            "nb_programs": 0,
            "nb_stream_groups": 0,
            "format_name": "matroska,webm",
            "format_long_name": "Matroska / WebM",
            "start_time": "0.000000",
            "duration": "1420.063000",
            "size": "1489169824",
            "bit_rate": "8389316",
            "probe_score": 100,
            "tags": {
                "PUBLISHER": "Crunchyroll",
                "ENCODER": "Lavf58.29.100"
            }
        }
        """

        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
                                                                  "-hide_banner",
                                                                  "-show_format",
                                                                  "-of", "json",
                                                                  self.__sourcePath])

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        return json.loads(ffprobeOutput)['format']
#[{'index': 0, 'codec_name': 'vp9', 'codec_long_name': 'Google VP9', 'profile': 'Profile 0', 'codec_type': 'video', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'width': 1920, 'height': 1080, 'coded_width': 1920, 'coded_height': 1080, 'closed_captions': 0, 'film_grain': 0, 'has_b_frames': 0, 'sample_aspect_ratio': '1:1', 'display_aspect_ratio': '16:9', 'pix_fmt': 'yuv420p', 'level': -99, 'color_range': 'tv', 'chroma_location': 'left', 'field_order': 'progressive', 'refs': 1, 'r_frame_rate': '24000/1001', 'avg_frame_rate': '24000/1001', 'time_base': '1/1000', 'start_pts': 0, 'start_time': '0.000000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'BPS': '7974017', 'NUMBER_OF_FRAMES': '34382', 'NUMBER_OF_BYTES': '1429358655', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libvpx-vp9', 'DURATION': '00:23:54.016000000'}}]
#[{'index': 1, 'codec_name': 'opus', 'codec_long_name': 'Opus (Opus Interactive Audio Codec)', 'codec_type': 'audio', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'sample_fmt': 'fltp', 'sample_rate': '48000', 'channels': 2, 'channel_layout': 'stereo', 'bits_per_sample': 0, 'initial_padding': 312, 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'extradata_size': 19, 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'jpn', 'title': 'Japanisch', 'BPS': '128000', 'NUMBER_OF_FRAMES': '61763', 'NUMBER_OF_BYTES': '22946145', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 libopus', 'DURATION': '00:23:54.141000000'}}]
#[{'index': 2, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 1, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'ger', 'title': 'Deutsch [Full]', 'BPS': '118', 'NUMBER_OF_FRAMES': '300', 'NUMBER_OF_BYTES': '21128', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:54.010000000'}}, {'index': 3, 'codec_name': 'webvtt', 'codec_long_name': 'WebVTT subtitle', 'codec_type': 'subtitle', 'codec_tag_string': '[0][0][0][0]', 'codec_tag': '0x0000', 'r_frame_rate': '0/0', 'avg_frame_rate': '0/0', 'time_base': '1/1000', 'start_pts': -7, 'start_time': '-0.007000', 'duration_ts': 1434141, 'duration': '1434.141000', 'disposition': {'default': 0, 'dub': 0, 'original': 0, 'comment': 0, 'lyrics': 0, 'karaoke': 0, 'forced': 0, 'hearing_impaired': 0, 'visual_impaired': 0, 'clean_effects': 0, 'attached_pic': 0, 'timed_thumbnails': 0, 'non_diegetic': 0, 'captions': 0, 'descriptions': 0, 'metadata': 0, 'dependent': 0, 'still_image': 0}, 'tags': {'language': 'eng', 'title': 'Englisch [Full]', 'BPS': '101', 'NUMBER_OF_FRAMES': '276', 'NUMBER_OF_BYTES': '16980', '_STATISTICS_WRITING_APP': "mkvmerge v63.0.0 ('Everything') 64-bit", '_STATISTICS_WRITING_DATE_UTC': '2023-10-07 13:59:46', '_STATISTICS_TAGS': 'BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES', 'ENCODER': 'Lavc61.3.100 webvtt', 'DURATION': '00:23:53.230000000'}}]

    def getStreamData(self):
        """Returns ffprobe stream data as an array with elements according to the following example
        {
            "index": 4,
            "codec_name": "hdmv_pgs_subtitle",
            "codec_long_name": "HDMV Presentation Graphic Stream subtitles",
            "codec_type": "subtitle",
            "codec_tag_string": "[0][0][0][0]",
            "codec_tag": "0x0000",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/1000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1421035,
            "duration": "1421.035000",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0,
                "non_diegetic": 0,
                "captions": 0,
                "descriptions": 0,
                "metadata": 0,
                "dependent": 0,
                "still_image": 0
            },
            "tags": {
                "language": "ger",
                "title": "German Full"
            }
        }
        """

        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
                                                                  "-hide_banner",
                                                                  "-show_streams",
                                                                  "-of", "json",
                                                                  self.__sourcePath])

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        return json.loads(ffprobeOutput)['streams']

    def getMediaDescriptor(self):
        return MediaDescriptor.fromFfprobe(self.context, self.getFormatData(), self.getStreamData())

    def getShowId(self) -> int:
        """Result is -1 if the filename did not match anything in the database"""
        return self.__pattern.getShowId() if self.__pattern is not None else -1

    def getPattern(self) -> Pattern:
        """Result is None if the filename did not match anything in the database"""
        return self.__pattern

    def getSeason(self):
        return int(self.__season)

    def getEpisode(self):
        return int(self.__episode)

    def getFilename(self):
        return self.__sourceFilename

    def getFileBasename(self):
        return self.__sourceFileBasename

    def assembleTargetFileBasename(self,
                                   label: str = "",
                                   quality: int = -1,
                                   fileIndex: int = -1,
                                   indexDigits: int = DEFAULT_INDEX_DIGITS,
                                   extraTokens: list = []):

        if 'show_descriptor' in self.context:
            season_digits = self.context['show_descriptor'][ShowDescriptor.INDICATOR_SEASON_DIGITS_KEY]
            episode_digits = self.context['show_descriptor'][ShowDescriptor.INDICATOR_EPISODE_DIGITS_KEY]
        else:
            season_digits = ShowDescriptor.DEFAULT_INDICATOR_SEASON_DIGITS
            episode_digits = ShowDescriptor.DEFAULT_INDICATOR_EPISODE_DIGITS

        targetFilenameTokens = []

        # targetFilenameExtension = FfxController.DEFAULT_FILE_EXTENSION if extension is None else str(extension)

        click.echo(f"assembleTargetFileBasename(): label={label} is {'truthy' if label else 'falsy'}")

        if label:

            targetFilenameTokens = [label]

            if fileIndex > -1:
                targetFilenameTokens += [f"{fileIndex:0{indexDigits}d}"]
            elif self.__season > -1 and self.__episode > -1:
                targetFilenameTokens += [f"S{self.__season:0{season_digits}d}E{self.__episode:0{episode_digits}d}"]
            elif self.__episode > -1:
                targetFilenameTokens += [f"E{self.__episode:0{episode_digits}d}"]

        else:
            targetFilenameTokens = [self.__sourceFileBasename]

        if quality != -1:
            targetFilenameTokens += [f"q{quality}"]

        # In case source and target filenames are the same, add an extension to distinguish output from input
        # if not label and self.__sourceFilenameExtension == targetFilenameExtension:
        #     targetFilenameTokens += ['ffx']
        targetFilenameTokens += extraTokens

        targetFilename = '_'.join(targetFilenameTokens)

        # self.__logger.debug(f"assembleTargetFileBasename(): Target filename: {targetFilename}")
        click.echo(f"assembleTargetFileBasename(): Target filename: {targetFilename}")

        return targetFilename
|
|
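
For orientation, a minimal standalone sketch of the token assembly performed by assembleTargetFileBasename(); all values are assumed for illustration:

# Sketch (assumed values): mirrors the zero-padded f-string formatting
# and the underscore join used above.
label, season, episode, quality = "MyShow", 1, 2, 32
season_digits, episode_digits = 2, 2
tokens = [label, f"S{season:0{season_digits}d}E{episode:0{episode_digits}d}", f"q{quality}"]
print('_'.join(tokens))  # MyShow_S01E02_q32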
@ -1,61 +0,0 @@
import click


DIFF_ADDED_KEY = 'added'
DIFF_REMOVED_KEY = 'removed'
DIFF_CHANGED_KEY = 'changed'
DIFF_UNCHANGED_KEY = 'unchanged'


def dictDiff(a: dict, b: dict):

    a_keys = set(a.keys())
    b_keys = set(b.keys())

    a_only = a_keys - b_keys
    b_only = b_keys - a_keys
    a_b = a_keys & b_keys

    changed = {k for k in a_b if a[k] != b[k]}

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = b_keys
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only
    if changed:
        diffResult[DIFF_CHANGED_KEY] = changed

    return diffResult


def dictCache(element: dict, cache: list = []):
    for index in range(len(cache)):
        diff = dictDiff(cache[index], element)
        if not diff:
            return index, cache
    cache.append(element)
    return -1, cache


def setDiff(a: set, b: set) -> dict:

    a_only = a - b
    b_only = b - a

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only

    return diffResult


def filterFilename(fileName: str) -> str:
    """This filter replaces characters from TMDB responses that are
    problematic in filenames with safer ones, or removes them."""

    fileName = str(fileName).replace(':', ';')
    return fileName
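
A short usage sketch of dictDiff() as defined above; the input dicts are made up for illustration:

# 'z' exists only in b, 'y' differs, 'x' is equal, nothing was removed.
a = {'x': 1, 'y': 2}
b = {'x': 1, 'y': 3, 'z': 4}
print(dictDiff(a, b))  # {'added': {'z'}, 'changed': {'y'}}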
@ -1,9 +0,0 @@
import subprocess
from typing import List


def executeProcess(commandSequence: List[str], directory: str = None):
    # process = subprocess.Popen([t.encode('utf-8') for t in commandSequence], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    process = subprocess.Popen(commandSequence, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8', cwd=directory)
    output, error = process.communicate()
    # return output.decode('utf-8'), error.decode('utf-8'), process.returncode
    return output, error, process.returncode
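
A minimal invocation sketch for executeProcess(); the echo command is just an assumed stand-in:

# Runs a trivial command and unpacks decoded stdout, stderr and the return code.
output, error, returnCode = executeProcess(["echo", "hello"])
print(returnCode, output.strip())  # 0 hello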
@ -1,114 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_2 import DispositionCombinator2


class DispositionCombinator21(DispositionCombinator2):

    VARIANT = 'D10'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator21.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
        else:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()

        #NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks
        #      so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT) # COMMENT
            subtrack1.add(TrackDisposition.FORCED) # DESCRIPTIONS

        return (subtrack0,
                subtrack1)

    def createAssertFunc(self):

        if self.__createPresets:

            if self._context['use_jellyfin']:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 1
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved the 'forced' disposition"
                    # source subIndex 0
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved the 'comment' disposition"

            else:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved the 'comment' disposition"
                    # source subIndex 1
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved the 'forced' disposition"

        else:

            if self._context['use_jellyfin']:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 1
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # source subIndex 0
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"

            else:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                    # source subIndex 1
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
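
Reading the variant code, a sketch based only on getPayload() above:

# 'D10' marks subtrack 0 as the carrier of the DEFAULT disposition:
# DispositionCombinator21(context).getPayload() == ({TrackDisposition.DEFAULT}, set())
# With createPresets=True the tuple is mirrored and FORCED is added as the
# stand-in noted in the #NOTE above:
# (set(), {TrackDisposition.DEFAULT, TrackDisposition.FORCED})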
@ -1,147 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator31(DispositionCombinator3):

    VARIANT = 'D100'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator31.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set()
            subtrack2 = set([TrackDisposition.DEFAULT])
        else:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
            subtrack2 = set()

        #NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks
        #      so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT) # COMMENT
            subtrack1.add(TrackDisposition.FORCED) # DESCRIPTIONS
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED) # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            if self._context['use_jellyfin']:

                def f(assertObj: dict):

                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")

                    trackDescriptors = assertObj['tracks']

                    # source subIndex 1
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved descriptions disposition"

                    # source subIndex 2
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                    #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved default disposition"

                    # source subIndex 0
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set default disposition"

            else:
                def f(assertObj: dict):

                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")

                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved default disposition"

                    # source subIndex 1
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved descriptions disposition"

                    # source subIndex 2
                    assert not (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                    #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved default disposition"

        else:

            if self._context['use_jellyfin']:

                def f(assertObj: dict):

                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")

                    trackDescriptors = assertObj['tracks']

                    # source subIndex 1
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"

                    # source subIndex 2
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

                    # source subIndex 0
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"

            else:
                def f(assertObj: dict):

                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")

                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"

                    # source subIndex 1
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

                    # source subIndex 2
                    assert not (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@ -1,132 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator32(DispositionCombinator3):

    VARIANT = 'D010'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator32.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
            subtrack2 = set()
        else:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
            subtrack2 = set()

        #NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks
        #      so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT) # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS) # DESCRIPTIONS
            subtrack2.add(TrackDisposition.FORCED) # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:
            if self._context['use_jellyfin']:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set default disposition"
                    # source subIndex 2
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved the 'forced' disposition"
                    # source subIndex 1
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                    #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved descriptions disposition"

            else:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set default disposition"
                    # source subIndex 1
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                    #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved descriptions disposition"
                    # source subIndex 2
                    assert not (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved the 'forced' disposition"

        else:

            if self._context['use_jellyfin']:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # source subIndex 2
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    # source subIndex 1
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"

            else:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # source subIndex 1
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                    # source subIndex 2
                    assert not (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@ -1,132 +0,0 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator33(DispositionCombinator3):

    VARIANT = 'D001'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator33.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
            subtrack2 = set()
        else:
            subtrack0 = set()
            subtrack1 = set()
            subtrack2 = set([TrackDisposition.DEFAULT])

        #NOTE: Current ffmpeg version will not set most of the dispositions on arbitrary tracks
        #      so some checks for preserved dispositions are omitted for now
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT) # COMMENT
            subtrack1.add(TrackDisposition.FORCED) # DESCRIPTIONS
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED) # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:
            if self._context['use_jellyfin']:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set default disposition"
                    # source subIndex 2
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                    #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved default disposition"
                    # source subIndex 1
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved descriptions disposition"

            else:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                    #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set default disposition"
                    # source subIndex 1
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved descriptions disposition"
                    # source subIndex 2
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"
                    # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                    #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved default disposition"

        else:

            if self._context['use_jellyfin']:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # source subIndex 2
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    # source subIndex 1
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"

            else:

                def f(assertObj: dict):
                    if not 'tracks' in assertObj.keys():
                        raise KeyError("assertObj does not contain key 'tracks'")
                    trackDescriptors = assertObj['tracks']

                    # source subIndex 0
                    assert not (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                    # source subIndex 1
                    assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                                ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                    # source subIndex 2
                    assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"

        return f

    def shouldFail(self):
        return False
@ -1,33 +0,0 @@
import os, sys, importlib, glob, inspect, itertools


class JellyfinCombinator():

    IDENTIFIER = 'jellyfin'

    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

    def getIdentifier(self):
        return JellyfinCombinator.IDENTIFIER

    @staticmethod
    def list():
        basePath = os.path.dirname(__file__)
        return [os.path.basename(p)[20:-3]
                for p
                in glob.glob(f"{ basePath }/jellyfin_combinator_*.py", recursive = True)
                if p != __file__]

    @staticmethod
    def getClassReference(identifier):
        importlib.import_module(f"ffx.test.jellyfin_combinator_{ identifier }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.test.jellyfin_combinator_{ identifier }"]):
            #HINT: Excluding MediaCombinator as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'JellyfinCombinator' and name.startswith('JellyfinCombinator'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [JellyfinCombinator.getClassReference(i) for i in JellyfinCombinator.list()]
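
The identifier extraction in list() relies on a fixed-width prefix; a tiny sketch of the slice arithmetic (filename assumed):

# 'jellyfin_combinator_' is 20 characters; '.py' is stripped by the -3.
moduleBasename = "jellyfin_combinator_1.py"
print(moduleBasename[20:-3])  # 1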
@ -1,34 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .jellyfin_combinator import JellyfinCombinator


class JellyfinCombinator0(JellyfinCombinator):

    VARIANT = 'J0'

    # def __init__(self, SubCombinators: dict = {}, context = None):
    def __init__(self, context = None):
        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

        # self._SubCombinators = SubCombinators

    def getVariant(self):
        return JellyfinCombinator0.VARIANT

    def getPayload(self):
        return False

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False
@ -1,34 +0,0 @@
import os, sys, importlib, glob, inspect, itertools

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .jellyfin_combinator import JellyfinCombinator


class JellyfinCombinator1(JellyfinCombinator):

    VARIANT = 'J1'

    # def __init__(self, SubCombinators: dict = {}, context = None):
    def __init__(self, context = None):

        self._context = context
        self._logger = context['logger']
        self._reportLogger = context['report_logger']

        # self._SubCombinators = SubCombinators

    def getVariant(self):
        return JellyfinCombinator1.VARIANT

    def getPayload(self):
        return True

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False
@ -1,166 +0,0 @@
import os, sys, importlib, glob, inspect, itertools, click

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .media_combinator import MediaCombinator

from .disposition_combinator_2 import DispositionCombinator2
from .track_tag_combinator_2 import TrackTagCombinator2
from .jellyfin_combinator import JellyfinCombinator
from .media_tag_combinator import MediaTagCombinator


class MediaCombinator2(MediaCombinator):

    VARIANT = 'VASS'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return MediaCombinator2.VARIANT

    def getPayload(self,
                   subtitleDispositionTuple = (set(), set()),
                   subtitleTagTuple = ({}, {})):

        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 0
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 0
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.VIDEO
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 0
        trackDescriptor0 = TrackDescriptor(**kwargs)

        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 1
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 1
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.AUDIO
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 0
        trackDescriptor1 = TrackDescriptor(**kwargs)

        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 2
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 2
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.SUBTITLE
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 0
        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = subtitleDispositionTuple[0]
        kwargs[TrackDescriptor.TAGS_KEY] = subtitleTagTuple[0]
        trackDescriptor2 = TrackDescriptor(**kwargs)

        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 3
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 3
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.SUBTITLE
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 1
        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = subtitleDispositionTuple[1]
        kwargs[TrackDescriptor.TAGS_KEY] = subtitleTagTuple[1]
        trackDescriptor3 = TrackDescriptor(**kwargs)

        kwargs = {}
        kwargs[MediaDescriptor.CONTEXT_KEY] = self._context
        kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY] = [trackDescriptor0,
                                                             trackDescriptor1,
                                                             trackDescriptor2,
                                                             trackDescriptor3]

        mediaDescriptor = MediaDescriptor(**kwargs)
        # mediaDescriptor.reindexSubIndices()

        return mediaDescriptor

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False

    def getYield(self):

        for MTC in MediaTagCombinator.getAllClassReferences():
            for DC2 in DispositionCombinator2.getAllClassReferences():
                for TC2 in TrackTagCombinator2.getAllClassReferences():
                    for J in JellyfinCombinator.getAllClassReferences():

                        j = J(self._context)
                        self._context['use_jellyfin'] = j.getPayload()

                        dc2 = DC2(self._context)
                        tc2 = TC2(self._context)

                        mtc = MTC(self._context)

                        yObj = {}

                        yObj['identifier'] = self.getIdentifier()
                        yObj['variants'] = [self.getVariant(),
                                            f"S:{dc2.getVariant()}",
                                            f"S:{tc2.getVariant()}",
                                            mtc.getVariant(),
                                            j.getVariant()]

                        yObj['payload'] = self.getPayload(dc2.getPayload(),
                                                          tc2.getPayload())

                        yObj['assertSelectors'] = ['M', 'SD', 'ST', 'MT', 'J']
                        yObj['assertFuncs'] = [self.assertFunc,
                                               dc2.createAssertFunc(),
                                               tc2.createAssertFunc(),
                                               mtc.createAssertFunc(),
                                               j.assertFunc]

                        yObj['shouldFail'] = (self.shouldFail()
                                              | dc2.shouldFail()
                                              | tc2.shouldFail()
                                              | mtc.shouldFail()
                                              | j.shouldFail())

                        yieldObj = {'target': yObj}

                        if self.__createPresets:

                            dc2_p = DC2(self._context, createPresets = True)
                            tc2_p = TC2(self._context, createPresets = True)

                            mtc_p = MTC(self._context, createPresets = True)

                            yObj_p = {}

                            yObj_p['identifier'] = self.getIdentifier()
                            yObj_p['variants'] = [self.getVariant(),
                                                  f"S:{dc2_p.getVariant()}",
                                                  f"S:{tc2_p.getVariant()}",
                                                  mtc_p.getVariant(),
                                                  j.getVariant()]

                            yObj_p['payload'] = self.getPayload(dc2_p.getPayload(),
                                                                tc2_p.getPayload())

                            yObj_p['assertSelectors'] = ['M', 'SD', 'ST', 'MT', 'J']
                            yObj_p['assertFuncs'] = [self.assertFunc,
                                                     dc2_p.createAssertFunc(),
                                                     tc2_p.createAssertFunc(),
                                                     mtc_p.createAssertFunc(),
                                                     j.assertFunc]

                            yObj_p['shouldFail'] = (self.shouldFail()
                                                    | dc2_p.shouldFail()
                                                    | tc2_p.shouldFail()
                                                    | mtc_p.shouldFail()
                                                    | j.shouldFail())

                            yieldObj['preset'] = yObj_p

                        yield yieldObj
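
A consumption sketch for getYield(); the context dict and its keys are assumed from the combinator classes above:

# for yieldObj in MediaCombinator2(context, createPresets = True).getYield():
#     target = yieldObj['target']        # variants, payload, assert funcs, shouldFail
#     preset = yieldObj.get('preset')    # only present when createPresets is set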
@ -0,0 +1,38 @@
[project]
name = "ffx"
description = "FFX recoding and metadata managing tool"
version = "0.2.3"
license = {file = "LICENSE.md"}
dependencies = [
    "requests",
    "jinja2",
    "click",
    "textual",
    "sqlalchemy",
]
readme = {file = "README.md", content-type = "text/markdown"}
authors = [
    {name = "Marius", email = "javanaut@maveno.de"}
]
maintainers = [
    {name = "Marius", email = "javanaut@maveno.de"}
]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Programming Language :: Python"
]

[project.urls]
Homepage = "https://gitea.maveno.de/Javanaut/ffx"
Repository = "https://gitea.maveno.de/Javanaut/ffx.git"
Issues = "https://gitea.maveno.de/Javanaut/ffx/issues"

[build-system]
requires = [
    "setuptools",
    "wheel"
]
build-backend = "setuptools.build_meta"

[project.scripts]
ffx = "ffx.ffx:ffx"
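
The [project.scripts] table wires the installed console command to the click group; a sketch of the equivalent Python call (assumes the package is importable):

from ffx.ffx import ffx  # "ffx = ffx.ffx:ffx" resolves to this import
# Calling ffx() runs the CLI group, matching what the installed 'ffx' script does.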
@ -0,0 +1,142 @@
import os, json


class ConfigurationController():

    CONFIG_FILENAME = 'ffx.json'
    DATABASE_FILENAME = 'ffx.db'
    LOG_FILENAME = 'ffx.log'

    DATABASE_PATH_CONFIG_KEY = 'databasePath'
    LOG_DIRECTORY_CONFIG_KEY = 'logDirectory'
    OUTPUT_FILENAME_TEMPLATE_KEY = 'outputFilenameTemplate'

    def __init__(self):

        self.__homeDir = os.path.expanduser("~")
        self.__localVarDir = os.path.join(self.__homeDir, '.local', 'var')
        self.__localEtcDir = os.path.join(self.__homeDir, '.local', 'etc')

        self.__configurationData = {}

        # .local/etc/ffx.json
        self.__configFilePath = os.path.join(self.__localEtcDir, ConfigurationController.CONFIG_FILENAME)
        if os.path.isfile(self.__configFilePath):
            with open(self.__configFilePath, 'r') as configurationFile:
                self.__configurationData = json.load(configurationFile)

        if ConfigurationController.DATABASE_PATH_CONFIG_KEY in self.__configurationData.keys():
            self.__databaseFilePath = self.__configurationData[ConfigurationController.DATABASE_PATH_CONFIG_KEY]
            os.makedirs(os.path.dirname(self.__databaseFilePath), exist_ok=True)
        else:
            ffxVarDir = os.path.join(self.__localVarDir, 'ffx')
            os.makedirs(ffxVarDir, exist_ok=True)
            self.__databaseFilePath = os.path.join(ffxVarDir, ConfigurationController.DATABASE_FILENAME)

        if ConfigurationController.LOG_DIRECTORY_CONFIG_KEY in self.__configurationData.keys():
            self.__logDir = self.__configurationData[ConfigurationController.LOG_DIRECTORY_CONFIG_KEY]
        else:
            self.__logDir = os.path.join(self.__localVarDir, 'log')
        os.makedirs(self.__logDir, exist_ok=True)

    def getHomeDirectory(self):
        return self.__homeDir

    def getLogFilePath(self):
        return os.path.join(self.__logDir, ConfigurationController.LOG_FILENAME)

    def getDatabaseFilePath(self):
        return self.__databaseFilePath

    def getData(self):
        return self.__configurationData


#
#
#
#    def addPattern(self, patternDescriptor):
#
#        try:
#
#            s = self.Session()
#            q = s.query(Pattern).filter(Pattern.show_id == int(patternDescriptor['show_id']),
#                                        Pattern.pattern == str(patternDescriptor['pattern']))
#
#            if not q.count():
#                pattern = Pattern(show_id = int(patternDescriptor['show_id']),
#                                  pattern = str(patternDescriptor['pattern']))
#                s.add(pattern)
#                s.commit()
#                return pattern.getId()
#            else:
#                return 0
#
#        except Exception as ex:
#            raise click.ClickException(f"PatternController.addPattern(): {repr(ex)}")
#        finally:
#            s.close()
#
#
#    def updatePattern(self, patternId, patternDescriptor):
#
#        try:
#            s = self.Session()
#            q = s.query(Pattern).filter(Pattern.id == int(patternId))
#
#            if q.count():
#
#                pattern = q.first()
#
#                pattern.show_id = int(patternDescriptor['show_id'])
#                pattern.pattern = str(patternDescriptor['pattern'])
#
#                s.commit()
#                return True
#
#            else:
#                return False
#
#        except Exception as ex:
#            raise click.ClickException(f"PatternController.updatePattern(): {repr(ex)}")
#        finally:
#            s.close()
#
#
#
#    def findPattern(self, patternDescriptor):
#
#        try:
#            s = self.Session()
#            q = s.query(Pattern).filter(Pattern.show_id == int(patternDescriptor['show_id']), Pattern.pattern == str(patternDescriptor['pattern']))
#
#            if q.count():
#                pattern = q.first()
#                return int(pattern.id)
#            else:
#                return None
#
#        except Exception as ex:
#            raise click.ClickException(f"PatternController.findPattern(): {repr(ex)}")
#        finally:
#            s.close()
#
#
#    def getPattern(self, patternId : int):
#
#        if type(patternId) is not int:
#            raise ValueError(f"PatternController.getPattern(): Argument patternId is required to be of type int")
#
#        try:
#            s = self.Session()
#            q = s.query(Pattern).filter(Pattern.id == int(patternId))
#
#            return q.first() if q.count() else None
#
#        except Exception as ex:
#            raise click.ClickException(f"PatternController.getPattern(): {repr(ex)}")
#        finally:
#            s.close()
#
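
A usage sketch for the controller above; the printed paths show the defaults when no ~/.local/etc/ffx.json overrides exist (values assumed):

cfg = ConfigurationController()
print(cfg.getDatabaseFilePath())  # e.g. ~/.local/var/ffx/ffx.db
print(cfg.getLogFilePath())       # e.g. ~/.local/var/log/ffx.log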
@ -0,0 +1,15 @@
VERSION = '0.2.3'
DATABASE_VERSION = 2

DEFAULT_QUALITY = 32
DEFAULT_AV1_PRESET = 5

DEFAULT_STEREO_BANDWIDTH = "112"
DEFAULT_AC3_BANDWIDTH = "256"
DEFAULT_DTS_BANDWIDTH = "320"
DEFAULT_7_1_BANDWIDTH = "384"

DEFAULT_CROP_START = 60
DEFAULT_CROP_LENGTH = 180

DEFAULT_OUTPUT_FILENAME_TEMPLATE = '{{ ffx_show_name }} - {{ ffx_index }}{{ ffx_index_separator }}{{ ffx_episode_name }}{{ ffx_indicator_separator }}{{ ffx_indicator }}'
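
A rendering sketch for the default filename template; all variable values are assumed for illustration (jinja2 is a declared dependency):

from jinja2 import Template

rendered = Template(DEFAULT_OUTPUT_FILENAME_TEMPLATE).render(
    ffx_show_name = "Some Show", ffx_index = "S01E02", ffx_index_separator = " - ",
    ffx_episode_name = "Pilot", ffx_indicator_separator = " ", ffx_indicator = "[q32]")
print(rendered)  # Some Show - S01E02 - Pilot [q32]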
@ -0,0 +1,102 @@
import os, click

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from ffx.model.show import Base

from ffx.model.property import Property

from ffx.constants import DATABASE_VERSION


DATABASE_VERSION_KEY = 'database_version'


class DatabaseVersionException(Exception):
    def __init__(self, errorMessage):
        super().__init__(errorMessage)


def databaseContext(databasePath: str = ''):

    databaseContext = {}

    if databasePath is None:
        # sqlite:///:memory:
        databasePath = ':memory:'
    elif not databasePath:
        homeDir = os.path.expanduser("~")
        ffxVarDir = os.path.join(homeDir, '.local', 'var', 'ffx')
        if not os.path.exists(ffxVarDir):
            os.makedirs(ffxVarDir)
        databasePath = os.path.join(ffxVarDir, 'ffx.db')

    databaseContext['url'] = f"sqlite:///{databasePath}"
    databaseContext['engine'] = create_engine(databaseContext['url'])
    databaseContext['session'] = sessionmaker(bind=databaseContext['engine'])

    Base.metadata.create_all(databaseContext['engine'])

    # isSyncronuous = False
    # while not isSyncronuous:
    # while True:
    #     try:
    #         with databaseContext['database_engine'].connect() as connection:
    #             connection.execute(sqlalchemy.text('PRAGMA foreign_keys=ON;'))
    #             #isSyncronuous = True
    #             break
    #     except sqlite3.OperationalError:
    #         time.sleep(0.1)

    ensureDatabaseVersion(databaseContext)

    return databaseContext


def ensureDatabaseVersion(databaseContext):

    currentDatabaseVersion = getDatabaseVersion(databaseContext)
    if currentDatabaseVersion:
        if currentDatabaseVersion != DATABASE_VERSION:
            raise DatabaseVersionException(f"Current database version ({currentDatabaseVersion}) does not match required ({DATABASE_VERSION})")
    else:
        setDatabaseVersion(databaseContext, DATABASE_VERSION)


def getDatabaseVersion(databaseContext):

    try:

        Session = databaseContext['session']
        s = Session()
        q = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY)

        return int(q.first().value) if q.count() else 0

    except Exception as ex:
        raise click.ClickException(f"getDatabaseVersion(): {repr(ex)}")
    finally:
        s.close()


def setDatabaseVersion(databaseContext, databaseVersion: int):

    try:
        Session = databaseContext['session']
        s = Session()

        q = s.query(Property).filter(Property.key == DATABASE_VERSION_KEY)

        dbVersion = int(databaseVersion)

        versionProperty = q.first()
        if versionProperty:
            versionProperty.value = str(dbVersion)
        else:
            versionProperty = Property(key = DATABASE_VERSION_KEY,
                                       value = str(dbVersion))
            s.add(versionProperty)
        s.commit()

    except Exception as ex:
        raise click.ClickException(f"setDatabaseVersion(): {repr(ex)}")
    finally:
        s.close()
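
A usage sketch for databaseContext(); per the branch above, passing None selects sqlite's in-memory backend:

dbContext = databaseContext(None)
print(dbContext['url'])  # sqlite:///:memory: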
@ -0,0 +1,740 @@
|
|||||||
|
#! /usr/bin/python3
|
||||||
|
|
||||||
|
import os, click, time, logging
|
||||||
|
|
||||||
|
from ffx.configuration_controller import ConfigurationController
|
||||||
|
|
||||||
|
from ffx.file_properties import FileProperties
|
||||||
|
|
||||||
|
from ffx.ffx_app import FfxApp
|
||||||
|
from ffx.ffx_controller import FfxController
|
||||||
|
from ffx.tmdb_controller import TmdbController
|
||||||
|
|
||||||
|
from ffx.database import databaseContext
|
||||||
|
|
||||||
|
from ffx.media_descriptor import MediaDescriptor
|
||||||
|
from ffx.track_descriptor import TrackDescriptor
|
||||||
|
from ffx.show_descriptor import ShowDescriptor
|
||||||
|
|
||||||
|
from ffx.track_type import TrackType
|
||||||
|
from ffx.video_encoder import VideoEncoder
|
||||||
|
from ffx.track_disposition import TrackDisposition
|
||||||
|
from ffx.track_codec import TrackCodec
|
||||||
|
|
||||||
|
from ffx.process import executeProcess
|
||||||
|
from ffx.helper import filterFilename, substituteTmdbFilename
|
||||||
|
from ffx.helper import getEpisodeFileBasename
|
||||||
|
|
||||||
|
from ffx.constants import DEFAULT_STEREO_BANDWIDTH, DEFAULT_AC3_BANDWIDTH, DEFAULT_DTS_BANDWIDTH, DEFAULT_7_1_BANDWIDTH
|
||||||
|
|
||||||
|
from ffx.filter.quality_filter import QualityFilter
|
||||||
|
from ffx.filter.preset_filter import PresetFilter
|
||||||
|
|
||||||
|
from ffx.filter.nlmeans_filter import NlmeansFilter
|
||||||
|
|
||||||
|
from ffx.constants import VERSION
|
||||||
|
|
||||||
|
from ffx.shifted_season_controller import ShiftedSeasonController
|
||||||
|
|
||||||
|
|
||||||
|
@click.group()
@click.pass_context
@click.option('--database-file', type=str, default='', help='Path to database file')
@click.option('-v', '--verbose', type=int, default=0, help='Set verbosity of output')
@click.option("--dry-run", is_flag=True, default=False)
def ffx(ctx, database_file, verbose, dry_run):
    """FFX"""

    ctx.obj = {}

    ctx.obj['config'] = ConfigurationController()

    ctx.obj['database'] = databaseContext(databasePath=database_file
                                          if database_file else ctx.obj['config'].getDatabaseFilePath())

    ctx.obj['dry_run'] = dry_run
    ctx.obj['verbosity'] = verbose

    # Logging levels: CRITICAL 50, ERROR 40, WARNING 30, INFO 20, DEBUG 10
    fileLogVerbosity = max(40 - verbose * 10, 10)
    consoleLogVerbosity = max(20 - verbose * 10, 10)

    ctx.obj['logger'] = logging.getLogger('FFX')
    ctx.obj['logger'].setLevel(logging.DEBUG)

    ffxFileHandler = logging.FileHandler(ctx.obj['config'].getLogFilePath())
    ffxFileHandler.setLevel(fileLogVerbosity)
    ffxConsoleHandler = logging.StreamHandler()
    ffxConsoleHandler.setLevel(consoleLogVerbosity)

    fileFormatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ffxFileHandler.setFormatter(fileFormatter)
    consoleFormatter = logging.Formatter(
        '%(message)s')
    ffxConsoleHandler.setFormatter(consoleFormatter)

    ctx.obj['logger'].addHandler(ffxConsoleHandler)
    ctx.obj['logger'].addHandler(ffxFileHandler)
# Define a subcommand
@ffx.command()
def version():
    click.echo(VERSION)


# Another subcommand
@ffx.command()
def help():
    click.echo(f"ffx {VERSION}\n")
    click.echo("Usage: ffx [input file] [output file] [vp9|av1] [q=[nn[,nn,...]]] [p=nn] [a=nnn[k]] [ac3=nnn[k]] [dts=nnn[k]] [crop]")
@ffx.command()
@click.pass_context
@click.argument('filename', nargs=1)
def inspect(ctx, filename):

    ctx.obj['command'] = 'inspect'
    ctx.obj['arguments'] = {}
    ctx.obj['arguments']['filename'] = filename

    app = FfxApp(ctx.obj)
    app.run()
def getUnmuxSequence(trackDescriptor: TrackDescriptor, sourcePath, targetPrefix, targetDirectory = ''):

    # executable and input file
    commandTokens = FfxController.COMMAND_TOKENS + ['-i', sourcePath]

    trackType = trackDescriptor.getType()

    targetPathBase = os.path.join(targetDirectory, targetPrefix) if targetDirectory else targetPrefix

    # mapping
    commandTokens += ['-map',
                      f"0:{trackType.indicator()}:{trackDescriptor.getSubIndex()}",
                      '-c',
                      'copy']

    trackCodec = trackDescriptor.getCodec()

    # output format
    codecFormat = trackCodec.format()
    if codecFormat is not None:
        commandTokens += ['-f', codecFormat]

    # output filename
    commandTokens += [f"{targetPathBase}.{trackCodec.extension()}"]

    return commandTokens
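# A hedged sketch of the kind of sequence this returns (assumed example values;
# FfxController.COMMAND_TOKENS is taken here to start with the ffmpeg executable):
#
#   getUnmuxSequence(subtitleTrack, 'in.mkv', 'Show_S01E02_3_ger')
#   # -> ['ffmpeg', '-i', 'in.mkv', '-map', '0:s:0', '-c', 'copy', '-f', 'srt', 'Show_S01E02_3_ger.srt']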
@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option("-o", "--output-directory", type=str, default='')
@click.option("-s", "--subtitles-only", is_flag=True, default=False)
@click.option('--nice', type=int, default=99, help='Niceness of started processes')
@click.option('--cpu', type=int, default=0, help='Limit CPU for started processes to percent')
def unmux(ctx,
          paths,
          label,
          output_directory,
          subtitles_only,
          nice,
          cpu):

    existingSourcePaths = [p for p in paths if os.path.isfile(p)]
    ctx.obj['logger'].debug(f"\nUnmuxing {len(existingSourcePaths)} files")

    ctx.obj['resource_limits'] = {}
    ctx.obj['resource_limits']['niceness'] = nice
    ctx.obj['resource_limits']['cpu_percent'] = cpu

    for sourcePath in existingSourcePaths:

        fp = FileProperties(ctx.obj, sourcePath)

        try:
            sourceMediaDescriptor = fp.getMediaDescriptor()

            season = fp.getSeason()
            episode = fp.getEpisode()

            #TODO: Adapt recognition to all formats
            targetLabel = label if label else fp.getFileBasename()
            targetIndicator = f"_S{season}E{episode}" if label and season != -1 and episode != -1 else ''

            if label and not targetIndicator:
                ctx.obj['logger'].warning(f"Skipping file {fp.getFilename()}: Label set but no indicator recognized")
                continue
            else:
                ctx.obj['logger'].info(f"\nUnmuxing file {fp.getFilename()}\n")

            for trackDescriptor in sourceMediaDescriptor.getAllTrackDescriptors():

                if trackDescriptor.getType() == TrackType.SUBTITLE or not subtitles_only:

                    # SEASON_EPISODE_STREAM_LANGUAGE_MATCH = '[sS]([0-9]+)[eE]([0-9]+)_([0-9]+)_([a-z]{3})(?:_([A-Z]{3}))*'
                    targetPrefix = f"{targetLabel}{targetIndicator}_{trackDescriptor.getIndex()}_{trackDescriptor.getLanguage().threeLetter()}"

                    td: TrackDisposition
                    for td in sorted(trackDescriptor.getDispositionSet(), key=lambda d: d.index()):
                        targetPrefix += f"_{td.indicator()}"

                    unmuxSequence = getUnmuxSequence(trackDescriptor, sourcePath, targetPrefix, targetDirectory = output_directory)

                    if unmuxSequence:
                        if not ctx.obj['dry_run']:

                            #TODO #425: Codec Enum
                            ctx.obj['logger'].info(f"Unmuxing stream {trackDescriptor.getIndex()} into file {targetPrefix}.{trackDescriptor.getCodec().extension()}")

                            ctx.obj['logger'].debug("Executing unmuxing sequence")

                            out, err, rc = executeProcess(unmuxSequence, context = ctx.obj)
                            if rc:
                                ctx.obj['logger'].error(f"Unmuxing of stream {trackDescriptor.getIndex()} failed with error ({rc}) {err}")
                    else:
                        ctx.obj['logger'].warning("Skipping stream with unknown codec")
        except Exception as ex:
            ctx.obj['logger'].warning(f"Skipping file {sourcePath} ({ex})")
@ffx.command()
@click.pass_context
def shows(ctx):

    ctx.obj['command'] = 'shows'

    app = FfxApp(ctx.obj)
    app.run()
def checkUniqueDispositions(context, mediaDescriptor: MediaDescriptor):

    # Check for multiple default or forced dispositions if not set by user input or database requirements.
    #
    # Query the user for the correct sub-indices, then configure the flags in the track descriptors
    # associated with the media descriptor accordingly. The correct tokens are then created downstream.
    if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default video stream detected and no prompt set')
        defaultVideoTrackSubIndex = click.prompt("More than one default video stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.VIDEO, defaultVideoTrackSubIndex)

    if len([v for v in mediaDescriptor.getVideoTracks() if v.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced video stream detected and no prompt set')
        forcedVideoTrackSubIndex = click.prompt("More than one forced video stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.VIDEO, forcedVideoTrackSubIndex)

    if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default audio stream detected and no prompt set')
        defaultAudioTrackSubIndex = click.prompt("More than one default audio stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.AUDIO, defaultAudioTrackSubIndex)

    if len([a for a in mediaDescriptor.getAudioTracks() if a.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced audio stream detected and no prompt set')
        forcedAudioTrackSubIndex = click.prompt("More than one forced audio stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.AUDIO, forcedAudioTrackSubIndex)

    if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.DEFAULT)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one default subtitle stream detected and no prompt set')
        defaultSubtitleTrackSubIndex = click.prompt("More than one default subtitle stream detected! Please select stream", type=int)
        mediaDescriptor.setDefaultSubTrack(TrackType.SUBTITLE, defaultSubtitleTrackSubIndex)

    if len([s for s in mediaDescriptor.getSubtitleTracks() if s.getDispositionFlag(TrackDisposition.FORCED)]) > 1:
        if context['no_prompt']:
            raise click.ClickException('More than one forced subtitle stream detected and no prompt set')
        forcedSubtitleTrackSubIndex = click.prompt("More than one forced subtitle stream detected! Please select stream", type=int)
        mediaDescriptor.setForcedSubTrack(TrackType.SUBTITLE, forcedSubtitleTrackSubIndex)
@ffx.command()
@click.pass_context
@click.argument('paths', nargs=-1)
@click.option('-l', '--label', type=str, default='', help='Label to be used as filename prefix')
@click.option('-v', '--video-encoder', type=str, default=FfxController.DEFAULT_VIDEO_ENCODER, help="Target video encoder (vp9 or av1)", show_default=True)
@click.option('-q', '--quality', type=str, default="", help="Quality settings to be used with the VP9 encoder")
@click.option('-p', '--preset', type=str, default="", help="Quality preset to be used with the AV1 encoder")
@click.option('-a', '--stereo-bitrate', type=int, default=DEFAULT_STEREO_BANDWIDTH, help="Bitrate in kbit/s used to encode stereo audio streams", show_default=True)
@click.option('--ac3', type=int, default=DEFAULT_AC3_BANDWIDTH, help="Bitrate in kbit/s used to encode 5.1 audio streams", show_default=True)
@click.option('--dts', type=int, default=DEFAULT_DTS_BANDWIDTH, help="Bitrate in kbit/s used to encode 6.1 audio streams", show_default=True)
@click.option('--subtitle-directory', type=str, default='', help='Load subtitles from here')
@click.option('--subtitle-prefix', type=str, default='', help='Subtitle filename prefix')
@click.option('--language', type=str, multiple=True, help='Set stream language. Use format <stream index>:<3 letter iso code>')
@click.option('--title', type=str, multiple=True, help='Set stream title. Use format <stream index>:<title>')
@click.option('--default-video', type=int, default=-1, help='Index of default video stream')
@click.option('--forced-video', type=int, default=-1, help='Index of forced video stream')
@click.option('--default-audio', type=int, default=-1, help='Index of default audio stream')
@click.option('--forced-audio', type=int, default=-1, help='Index of forced audio stream')
@click.option('--default-subtitle', type=int, default=-1, help='Index of default subtitle stream')
@click.option('--forced-subtitle', type=int, default=-1, help='Index of forced subtitle stream')
@click.option('--rearrange-streams', type=str, default="", help='Rearrange output stream order. Use a comma-separated list of integers')
@click.option("--crop", is_flag=False, flag_value="default", default="none")
@click.option("--output-directory", type=str, default='')
@click.option("--denoise", is_flag=False, flag_value="default", default="none")
@click.option("--denoise-use-hw", is_flag=True, default=False)
@click.option('--denoise-strength', type=str, default='', help='Denoising strength; more blurring vs. more detail.')
@click.option('--denoise-patch-size', type=str, default='', help='Subimage size for filtering the luminosity plane. Reduces broader noise patterns but is costly.')
@click.option('--denoise-chroma-patch-size', type=str, default='', help='Subimage size for filtering the chroma planes.')
@click.option('--denoise-research-window', type=str, default='', help='Range to search for comparable patches on the luminosity plane. Better filtering but costly.')
@click.option('--denoise-chroma-research-window', type=str, default='', help='Range to search for comparable patches on the chroma planes.')
@click.option('--show', type=int, default=-1, help='Set TMDB show identifier')
@click.option('--season', type=int, default=-1, help='Set season of show')
@click.option('--episode', type=int, default=-1, help='Set episode of show')
@click.option("--no-tmdb", is_flag=True, default=False)
@click.option("--no-pattern", is_flag=True, default=False)
@click.option("--dont-pass-dispositions", is_flag=True, default=False)
@click.option("--no-prompt", is_flag=True, default=False)
@click.option("--no-signature", is_flag=True, default=False)
@click.option("--keep-mkvmerge-metadata", is_flag=True, default=False)
@click.option('--nice', type=int, default=99, help='Niceness of started processes')
@click.option('--cpu', type=int, default=0, help='Limit CPU for started processes to percent')
def convert(ctx,
            paths,
            label,
            video_encoder,
            quality,
            preset,
            stereo_bitrate,
            ac3,
            dts,
            subtitle_directory,
            subtitle_prefix,
            language,
            title,
            default_video,
            forced_video,
            default_audio,
            forced_audio,
            default_subtitle,
            forced_subtitle,
            rearrange_streams,
            crop,
            output_directory,
            denoise,
            denoise_use_hw,
            denoise_strength,
            denoise_patch_size,
            denoise_chroma_patch_size,
            denoise_research_window,
            denoise_chroma_research_window,
            show,
            season,
            episode,
            no_tmdb,
            no_pattern,
            dont_pass_dispositions,
            no_prompt,
            no_signature,
            keep_mkvmerge_metadata,
            nice,
            cpu):
"""Batch conversion of audiovideo files in format suitable for web playback, e.g. jellyfin
|
||||||
|
|
||||||
|
Files found under PATHS will be converted according to parameters.
|
||||||
|
Filename extensions will be changed appropriately.
|
||||||
|
Suffices will we appended to filename in case of multiple created files
|
||||||
|
or if the filename has not changed."""
|
||||||
|
|
||||||
|
    startTime = time.perf_counter()

    context = ctx.obj

    context['video_encoder'] = VideoEncoder.fromLabel(video_encoder)

    targetFormat = FfxController.DEFAULT_FILE_FORMAT
    targetExtension = FfxController.DEFAULT_FILE_EXTENSION

    context['use_tmdb'] = not no_tmdb
    context['use_pattern'] = not no_pattern
    context['no_prompt'] = no_prompt
    context['no_signature'] = no_signature
    context['keep_mkvmerge_metadata'] = keep_mkvmerge_metadata

    context['resource_limits'] = {}
    context['resource_limits']['niceness'] = nice
    context['resource_limits']['cpu_percent'] = cpu

    context['import_subtitles'] = (subtitle_directory and subtitle_prefix)
    if context['import_subtitles']:
        context['subtitle_directory'] = subtitle_directory
        context['subtitle_prefix'] = subtitle_prefix
    existingSourcePaths = [p for p in paths if os.path.isfile(p) and p.split('.')[-1] in FfxController.INPUT_FILE_EXTENSIONS]

    # CLI overrides
    cliOverrides = {}

    if language:
        cliOverrides['languages'] = {}
        for overLang in language:
            olTokens = overLang.split(':')
            if len(olTokens) == 2:
                try:
                    cliOverrides['languages'][int(olTokens[0])] = olTokens[1]
                except ValueError:
                    ctx.obj['logger'].warning(f"Ignoring non-integer language index {olTokens[0]}")
                    continue

    if title:
        cliOverrides['titles'] = {}
        for overTitle in title:
            otTokens = overTitle.split(':')
            if len(otTokens) == 2:
                try:
                    cliOverrides['titles'][int(otTokens[0])] = otTokens[1]
                except ValueError:
                    ctx.obj['logger'].warning(f"Ignoring non-integer title index {otTokens[0]}")
                    continue
    if default_video != -1:
        cliOverrides['default_video'] = default_video
    if forced_video != -1:
        cliOverrides['forced_video'] = forced_video
    if default_audio != -1:
        cliOverrides['default_audio'] = default_audio
    if forced_audio != -1:
        cliOverrides['forced_audio'] = forced_audio
    if default_subtitle != -1:
        cliOverrides['default_subtitle'] = default_subtitle
    if forced_subtitle != -1:
        cliOverrides['forced_subtitle'] = forced_subtitle

    if show != -1 or season != -1 or episode != -1:
        if len(existingSourcePaths) > 1:
            context['logger'].warning("Ignoring TMDB show, season and episode overrides; not supported for multiple source files")
        else:
            cliOverrides['tmdb'] = {}
            if show != -1:
                cliOverrides['tmdb']['show'] = show
            if season != -1:
                cliOverrides['tmdb']['season'] = season
            if episode != -1:
                cliOverrides['tmdb']['episode'] = episode

    if cliOverrides:
        context['overrides'] = cliOverrides

    if rearrange_streams:
        try:
            cliOverrides['stream_order'] = [int(si) for si in rearrange_streams.split(",")]
        except ValueError:
            ctx.obj['logger'].error("Non-integer in rearrange stream parameter")
            raise click.Abort()
ctx.obj['logger'].debug(f"\nVideo encoder: {video_encoder}")
|
||||||
|
|
||||||
|
qualityTokens = quality.split(',')
|
||||||
|
q_list = [q for q in qualityTokens if q.isnumeric()]
|
||||||
|
ctx.obj['logger'].debug(f"Qualities: {q_list}")
|
||||||
|
|
||||||
|
presetTokens = preset.split(',')
|
||||||
|
p_list = [p for p in presetTokens if p.isnumeric()]
|
||||||
|
ctx.obj['logger'].debug(f"Presets: {p_list}")
|
||||||
|
|
||||||
|
|
||||||
|
context['bitrates'] = {}
|
||||||
|
context['bitrates']['stereo'] = str(stereo_bitrate) if str(stereo_bitrate).endswith('k') else f"{stereo_bitrate}k"
|
||||||
|
context['bitrates']['ac3'] = str(ac3) if str(ac3).endswith('k') else f"{ac3}k"
|
||||||
|
context['bitrates']['dts'] = str(dts) if str(dts).endswith('k') else f"{dts}k"
|
||||||
|
|
||||||
|
ctx.obj['logger'].debug(f"Stereo bitrate: {context['bitrates']['stereo']}")
|
||||||
|
ctx.obj['logger'].debug(f"AC3 bitrate: {context['bitrates']['ac3']}")
|
||||||
|
ctx.obj['logger'].debug(f"DTS bitrate: {context['bitrates']['dts']}")
|
||||||
|
|
||||||
|
|
||||||
|
    # Process crop parameters
    context['perform_crop'] = (crop != 'none')
    if context['perform_crop']:
        cTokens = crop.split(',')
        if cTokens and len(cTokens) == 2:
            context['crop_start'] = int(cTokens[0])
            context['crop_length'] = int(cTokens[1])
            ctx.obj['logger'].debug(f"Crop start={context['crop_start']} length={context['crop_length']}")

    tc = TmdbController() if context['use_tmdb'] else None

    qualityKwargs = {QualityFilter.QUALITY_KEY: quality}
    qf = QualityFilter(**qualityKwargs)

    if context['video_encoder'] == VideoEncoder.AV1 and preset:
        presetKwargs = {PresetFilter.PRESET_KEY: preset}
        PresetFilter(**presetKwargs)

    denoiseKwargs = {}
    if denoise_strength:
        denoiseKwargs[NlmeansFilter.STRENGTH_KEY] = denoise_strength
    if denoise_patch_size:
        denoiseKwargs[NlmeansFilter.PATCH_SIZE_KEY] = denoise_patch_size
    if denoise_chroma_patch_size:
        denoiseKwargs[NlmeansFilter.CHROMA_PATCH_SIZE_KEY] = denoise_chroma_patch_size
    if denoise_research_window:
        denoiseKwargs[NlmeansFilter.RESEARCH_WINDOW_KEY] = denoise_research_window
    if denoise_chroma_research_window:
        denoiseKwargs[NlmeansFilter.CHROMA_RESEARCH_WINDOW_KEY] = denoise_chroma_research_window
    if denoise_use_hw:
        # Forward the --denoise-use-hw flag to the filter (selects nlmeans_opencl)
        denoiseKwargs['use_hardware'] = denoise_use_hw
    if denoise != 'none' or denoiseKwargs:
        NlmeansFilter(**denoiseKwargs)

    chainYield = list(qf.getChainYield())

    ctx.obj['logger'].info(f"\nRunning {len(existingSourcePaths) * len(chainYield)} jobs")

    jobIndex = 0
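    # Note on the filter setup above: Filter subclasses append themselves to a
    # chain shared at class level in Filter.__init__, so the PresetFilter and
    # NlmeansFilter instances do not need to be kept; getChainYield() on any
    # instance walks the whole chain. Illustrative element of one chain
    # iteration (values follow from the defaults above):
    #
    #   {'identifier': 'nlmeans', 'parameters': {'strength': 2.8, ...},
    #    'suffices': [], 'variant': 'DS2.8-DP13-DPC9-DR23-DRC17',
    #    'tokens': ['-vf', 'nlmeans=s=2.8:p=13:pc=9:r=23:rc=17']}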
    for sourcePath in existingSourcePaths:

        # Separate base directory, basename and extension for the current source file
        sourceDirectory = os.path.dirname(sourcePath)
        sourceFilename = os.path.basename(sourcePath)
        sourcePathTokens = sourceFilename.split('.')

        sourceFileBasename = '.'.join(sourcePathTokens[:-1])
        sourceFilenameExtension = sourcePathTokens[-1]

        ctx.obj['logger'].info(f"\nProcessing file {sourcePath}")

        targetSuffices = {}

        mediaFileProperties = FileProperties(context, sourceFilename)

        ssc = ShiftedSeasonController(context)

        showId = mediaFileProperties.getShowId()

        #HINT: -1 if not set
        if 'tmdb' in cliOverrides.keys() and 'season' in cliOverrides['tmdb']:
            showSeason = cliOverrides['tmdb']['season']
        else:
            showSeason = mediaFileProperties.getSeason()

        if 'tmdb' in cliOverrides.keys() and 'episode' in cliOverrides['tmdb']:
            showEpisode = cliOverrides['tmdb']['episode']
        else:
            showEpisode = mediaFileProperties.getEpisode()

        ctx.obj['logger'].debug(f"Season={showSeason} Episode={showEpisode}")
        sourceMediaDescriptor = mediaFileProperties.getMediaDescriptor()

        #HINT: This is None if the filename did not match anything in the database
        currentPattern = mediaFileProperties.getPattern() if context['use_pattern'] else None

        ctx.obj['logger'].debug(f"Pattern matching: {'No' if currentPattern is None else 'Yes'}")

        # Set up the FfxController depending on whether pattern matching is enabled and a pattern was matched
        if currentPattern is None:

            checkUniqueDispositions(context, sourceMediaDescriptor)
            currentShowDescriptor = None

            if context['import_subtitles']:
                sourceMediaDescriptor.importSubtitles(context['subtitle_directory'],
                                                      context['subtitle_prefix'],
                                                      showSeason,
                                                      showEpisode)

            if cliOverrides:
                sourceMediaDescriptor.applyOverrides(cliOverrides)

            fc = FfxController(context, sourceMediaDescriptor)

        else:
            targetMediaDescriptor = currentPattern.getMediaDescriptor(ctx.obj)
            checkUniqueDispositions(context, targetMediaDescriptor)
            currentShowDescriptor = currentPattern.getShowDescriptor(ctx.obj)

            if context['import_subtitles']:
                targetMediaDescriptor.importSubtitles(context['subtitle_directory'],
                                                      context['subtitle_prefix'],
                                                      showSeason,
                                                      showEpisode)

            ctx.obj['logger'].debug(f"tmd subindices: {[t.getIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getSubIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getDispositionFlag(TrackDisposition.DEFAULT) for t in targetMediaDescriptor.getAllTrackDescriptors()]}")

            if cliOverrides:
                targetMediaDescriptor.applyOverrides(cliOverrides)

            ctx.obj['logger'].debug(f"tmd subindices: {[t.getIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getSubIndex() for t in targetMediaDescriptor.getAllTrackDescriptors()]} {[t.getDispositionFlag(TrackDisposition.DEFAULT) for t in targetMediaDescriptor.getAllTrackDescriptors()]}")

            ctx.obj['logger'].debug(f"Input mapping tokens (2nd pass): {targetMediaDescriptor.getInputMappingTokens()}")

            fc = FfxController(context, targetMediaDescriptor, sourceMediaDescriptor)
        indexSeasonDigits = currentShowDescriptor.getIndexSeasonDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDEX_SEASON_DIGITS
        indexEpisodeDigits = currentShowDescriptor.getIndexEpisodeDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDEX_EPISODE_DIGITS
        indicatorSeasonDigits = currentShowDescriptor.getIndicatorSeasonDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDICATOR_SEASON_DIGITS
        indicatorEpisodeDigits = currentShowDescriptor.getIndicatorEpisodeDigits() if currentPattern is not None else ShowDescriptor.DEFAULT_INDICATOR_EPISODE_DIGITS

        # Shift season and episode if defined for this show
        if ('tmdb' not in cliOverrides.keys() and showId != -1
                and showSeason != -1 and showEpisode != -1):
            shiftedShowSeason, shiftedShowEpisode = ssc.shiftSeason(showId,
                                                                    season=showSeason,
                                                                    episode=showEpisode)
        else:
            shiftedShowSeason = showSeason
            shiftedShowEpisode = showEpisode

        # Assemble the target filename depending on whether TMDB lookup is enabled
        #HINT: -1 if not set
        showId = cliOverrides['tmdb']['show'] if 'tmdb' in cliOverrides.keys() and 'show' in cliOverrides['tmdb'] else (-1 if currentShowDescriptor is None else currentShowDescriptor.getId())

        if context['use_tmdb'] and showId != -1 and shiftedShowSeason != -1 and shiftedShowEpisode != -1:

            ctx.obj['logger'].debug(f"Querying TMDB for show_id={showId} season={shiftedShowSeason} episode={shiftedShowEpisode}")

            if currentPattern is None:
                sName, showYear = tc.getShowNameAndYear(showId)
                showName = filterFilename(sName)
                showFilenamePrefix = f"{showName} ({str(showYear)})"
            else:
                showFilenamePrefix = currentShowDescriptor.getFilenamePrefix()

            tmdbEpisodeResult = tc.queryEpisode(showId, shiftedShowSeason, shiftedShowEpisode)

            ctx.obj['logger'].debug(f"tmdbEpisodeResult={tmdbEpisodeResult}")

            if tmdbEpisodeResult:
                substitutedEpisodeName = filterFilename(substituteTmdbFilename(tmdbEpisodeResult['name']))
                sourceFileBasename = getEpisodeFileBasename(showFilenamePrefix,
                                                            substitutedEpisodeName,
                                                            shiftedShowSeason,
                                                            shiftedShowEpisode,
                                                            indexSeasonDigits,
                                                            indexEpisodeDigits,
                                                            indicatorSeasonDigits,
                                                            indicatorEpisodeDigits,
                                                            context=ctx.obj)

        if label:
            if shiftedShowSeason > -1 and shiftedShowEpisode > -1:
                targetSuffices['se'] = f"S{shiftedShowSeason:0{indicatorSeasonDigits}d}E{shiftedShowEpisode:0{indicatorEpisodeDigits}d}"
            elif shiftedShowEpisode > -1:
                targetSuffices['se'] = f"E{shiftedShowEpisode:0{indicatorEpisodeDigits}d}"
            else:
                if 'se' in targetSuffices.keys():
                    del targetSuffices['se']

        ctx.obj['logger'].debug(f"fileBasename={sourceFileBasename}")
        for chainIteration in chainYield:

            ctx.obj['logger'].debug(f"\nchain iteration: {chainIteration}\n")

            # if len(q_list) > 1:
            #     targetSuffices['q'] = f"q{q}"

            chainVariant = '-'.join([fy['variant'] for fy in chainIteration])

            ctx.obj['logger'].debug(f"\nRunning job {jobIndex} file={sourcePath} variant={chainVariant}")
            jobIndex += 1

            ctx.obj['logger'].debug(f"label={label if label else 'Falsy'}")
            ctx.obj['logger'].debug(f"sourceFileBasename={sourceFileBasename}")

            # targetFileBasename = mediaFileProperties.assembleTargetFileBasename(label,
            #                                                                     q if len(q_list) > 1 else -1,
            #
            targetFileBasename = sourceFileBasename if context['use_tmdb'] and not label else label

            targetFilenameTokens = [targetFileBasename]

            if 'se' in targetSuffices.keys():
                targetFilenameTokens += [targetSuffices['se']]

            # if 'q' in targetSuffices.keys():
            #     targetFilenameTokens += [targetSuffices['q']]
            for filterYield in chainIteration:

                # filterIdentifier = filterYield['identifier']
                # filterParameters = filterYield['parameters']
                # filterSuffices = filterYield['suffices']

                targetFilenameTokens += filterYield['suffices']

            #TODO #387
            # targetFilename = ((f"{sourceFileBasename}_q{q}" if len(q_list) > 1 else sourceFileBasename)
            #                   if context['use_tmdb'] else targetFileBasename)

            targetFilename = f"{'_'.join(targetFilenameTokens)}.{targetExtension}"

            targetPath = os.path.join(output_directory if output_directory else sourceDirectory, targetFilename)

            #TODO: Adjust the target extension
            ctx.obj['logger'].info(f"Creating file {targetFilename}")

            fc.runJob(sourcePath,
                      targetPath,
                      targetFormat,
                      context['video_encoder'],
                      chainIteration)

    #TODO: click.confirm('Warning! This file is not compliant to the defined source schema! Do you want to continue?', abort=True)

    endTime = time.perf_counter()
    ctx.obj['logger'].info(f"\nDONE\nTime elapsed {endTime - startTime}")


if __name__ == '__main__':
    ffx()
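# Illustrative invocations (hypothetical paths; options as defined above):
#
#   ffx convert episode01.mkv --video-encoder av1 -p 6,8 --denoise --no-tmdb
#   ffx --dry-run unmux -s -o subs/ movie.mkv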
@@ -0,0 +1,201 @@
import os, re, json

from .media_descriptor import MediaDescriptor
from .pattern_controller import PatternController

from .process import executeProcess

from ffx.model.pattern import Pattern


class FileProperties():

    FILE_EXTENSIONS = ['mkv', 'mp4', 'avi', 'flv', 'webm']

    SE_INDICATOR_PATTERN = '([sS][0-9]+[eE][0-9]+)'
    SEASON_EPISODE_INDICATOR_MATCH = '[sS]([0-9]+)[eE]([0-9]+)'
    EPISODE_INDICATOR_MATCH = '[eE]([0-9]+)'

    DEFAULT_INDEX_DIGITS = 3

    def __init__(self, context, sourcePath):

        self.context = context

        self.__logger = context['logger']

        # Separate base directory, basename and extension for the current source file
        self.__sourcePath = sourcePath

        self.__sourceDirectory = os.path.dirname(self.__sourcePath)
        self.__sourceFilename = os.path.basename(self.__sourcePath)

        sourcePathTokens = self.__sourceFilename.split('.')

        if sourcePathTokens[-1] in FileProperties.FILE_EXTENSIONS:
            self.__sourceFileBasename = '.'.join(sourcePathTokens[:-1])
            self.__sourceFilenameExtension = sourcePathTokens[-1]
        else:
            self.__sourceFileBasename = self.__sourceFilename
            self.__sourceFilenameExtension = ''

        self.__pc = PatternController(context)

        # Check whether the database contains a matching pattern
        matchResult = self.__pc.matchFilename(self.__sourceFilename)

        self.__logger.debug(f"FileProperties.__init__(): Match result: {matchResult}")

        self.__pattern: Pattern = matchResult['pattern'] if matchResult else None

        if matchResult:
            databaseMatchedGroups = matchResult['match'].groups()
            self.__logger.debug(f"FileProperties.__init__(): Matched groups: {databaseMatchedGroups}")

            seIndicator = databaseMatchedGroups[0]

            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, seIndicator)
            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, seIndicator)

        else:
            self.__logger.debug(f"FileProperties.__init__(): Checking file name for indicator {self.__sourceFilename}")

            se_match = re.search(FileProperties.SEASON_EPISODE_INDICATOR_MATCH, self.__sourceFilename)
            e_match = re.search(FileProperties.EPISODE_INDICATOR_MATCH, self.__sourceFilename)

        if se_match is not None:
            self.__season = int(se_match.group(1))
            self.__episode = int(se_match.group(2))
        elif e_match is not None:
            self.__season = -1
            self.__episode = int(e_match.group(1))
        else:
            self.__season = -1
            self.__episode = -1


    def getFormatData(self):
        """
        "format": {
            "filename": "Downloads/nagatoro_s02/nagatoro_s01e02.mkv",
            "nb_streams": 18,
            "nb_programs": 0,
            "nb_stream_groups": 0,
            "format_name": "matroska,webm",
            "format_long_name": "Matroska / WebM",
            "start_time": "0.000000",
            "duration": "1420.063000",
            "size": "1489169824",
            "bit_rate": "8389316",
            "probe_score": 100,
            "tags": {
                "PUBLISHER": "Crunchyroll",
                "ENCODER": "Lavf58.29.100"
            }
        }
        """

        # ffprobe -hide_banner -show_format -of json
        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
                                                                  "-hide_banner",
                                                                  "-show_format",
                                                                  "-of", "json",
                                                                  self.__sourcePath]) #,
                                                                  #context = self.context)

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        return json.loads(ffprobeOutput)['format']


    def getStreamData(self):
        """Returns ffprobe stream data as an array with elements according to the following example
        {
            "index": 4,
            "codec_name": "hdmv_pgs_subtitle",
            "codec_long_name": "HDMV Presentation Graphic Stream subtitles",
            "codec_type": "subtitle",
            "codec_tag_string": "[0][0][0][0]",
            "codec_tag": "0x0000",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/1000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1421035,
            "duration": "1421.035000",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0,
                "non_diegetic": 0,
                "captions": 0,
                "descriptions": 0,
                "metadata": 0,
                "dependent": 0,
                "still_image": 0
            },
            "tags": {
                "language": "ger",
                "title": "German Full"
            }
        }
        """

        # ffprobe -hide_banner -show_streams -of json
        ffprobeOutput, ffprobeError, returnCode = executeProcess(["ffprobe",
                                                                  "-hide_banner",
                                                                  "-show_streams",
                                                                  "-of", "json",
                                                                  self.__sourcePath]) #,
                                                                  #context = self.context)

        if 'Invalid data found when processing input' in ffprobeError:
            raise Exception(f"File {self.__sourcePath} does not contain valid stream data")

        if returnCode != 0:
            raise Exception(f"ffprobe returned with error {returnCode}")

        return json.loads(ffprobeOutput)['streams']


    def getMediaDescriptor(self):
        return MediaDescriptor.fromFfprobe(self.context, self.getFormatData(), self.getStreamData())


    def getShowId(self) -> int:
        """Result is -1 if the filename did not match anything in the database"""
        return self.__pattern.getShowId() if self.__pattern is not None else -1

    def getPattern(self) -> Pattern:
        """Result is None if the filename did not match anything in the database"""
        return self.__pattern


    def getSeason(self) -> int:
        return int(self.__season)

    def getEpisode(self) -> int:
        return int(self.__episode)


    def getFilename(self):
        return self.__sourceFilename

    def getFileBasename(self):
        return self.__sourceFileBasename
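# A minimal usage sketch (assumed context dict with a configured 'logger' key):
#
#   fp = FileProperties(context, '/media/in/show_s01e02.mkv')
#   fp.getSeason(), fp.getEpisode()   # -> (1, 2), parsed from the SxxEyy indicator
#   md = fp.getMediaDescriptor()      # built from ffprobe format and stream data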
@@ -0,0 +1,17 @@
import itertools


class Filter():

    # Class-level chain shared by all Filter instances: each subclass registers
    # itself here on construction.
    filterChain: list = []

    def __init__(self, filter):

        self.filterChain.append(filter)

    def getFilterChain(self):
        return self.filterChain

    def getChainYield(self):
        for fy in itertools.product(*[f.getYield() for f in self.filterChain]):
            yield fy
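# getChainYield() produces the cartesian product of every registered filter's
# getYield() output. Sketch (assumed values): a QualityFilter with qualities
# 30,34 and a PresetFilter with presets 6,8 yield four chain iterations:
# (Q30, P6), (Q30, P8), (Q34, P6), (Q34, P8).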
@@ -0,0 +1,162 @@
import itertools

from .filter import Filter


class NlmeansFilter(Filter):

    IDENTIFIER = 'nlmeans'

    DEFAULT_STRENGTH: float = 2.8
    DEFAULT_PATCH_SIZE: int = 13
    DEFAULT_CHROMA_PATCH_SIZE: int = 9
    DEFAULT_RESEARCH_WINDOW: int = 23
    DEFAULT_CHROMA_RESEARCH_WINDOW: int = 17

    STRENGTH_KEY = 'strength'
    PATCH_SIZE_KEY = 'patch_size'
    CHROMA_PATCH_SIZE_KEY = 'chroma_patch_size'
    RESEARCH_WINDOW_KEY = 'research_window'
    CHROMA_RESEARCH_WINDOW_KEY = 'chroma_research_window'


    def __init__(self, **kwargs):

        self.__useHardware = kwargs.get('use_hardware', False)

        self.__strengthList = []
        strength = kwargs.get(NlmeansFilter.STRENGTH_KEY, '')
        if strength:
            strengthTokens = strength.split(',')
            for st in strengthTokens:
                try:
                    strengthValue = float(st)
                except ValueError:
                    raise ValueError('NlmeansFilter: Strength value has to be of type float')
                if strengthValue < 1.0 or strengthValue > 30.0:
                    raise ValueError('NlmeansFilter: Strength value has to be between 1.0 and 30.0')
                self.__strengthList.append(strengthValue)
        else:
            self.__strengthList = [NlmeansFilter.DEFAULT_STRENGTH]

        self.__patchSizeList = []
        patchSize = kwargs.get(NlmeansFilter.PATCH_SIZE_KEY, '')
        if patchSize:
            patchSizeTokens = patchSize.split(',')
            for pst in patchSizeTokens:
                try:
                    patchSizeValue = int(pst)
                except ValueError:
                    raise ValueError('NlmeansFilter: Patch size value has to be of type int')
                if patchSizeValue < 0 or patchSizeValue > 99:
                    raise ValueError('NlmeansFilter: Patch size value has to be between 0 and 99')
                if patchSizeValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Patch size value has to be an odd number')
                self.__patchSizeList.append(patchSizeValue)
        else:
            self.__patchSizeList = [NlmeansFilter.DEFAULT_PATCH_SIZE]

        self.__chromaPatchSizeList = []
        chromaPatchSize = kwargs.get(NlmeansFilter.CHROMA_PATCH_SIZE_KEY, '')
        if chromaPatchSize:
            chromaPatchSizeTokens = chromaPatchSize.split(',')
            for cpst in chromaPatchSizeTokens:
                try:
                    chromaPatchSizeValue = int(cpst)
                except ValueError:
                    raise ValueError('NlmeansFilter: Chroma patch size value has to be of type int')
                if chromaPatchSizeValue < 0 or chromaPatchSizeValue > 99:
                    raise ValueError('NlmeansFilter: Chroma patch value has to be between 0 and 99')
                if chromaPatchSizeValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Chroma patch value has to be an odd number')
                self.__chromaPatchSizeList.append(chromaPatchSizeValue)
        else:
            self.__chromaPatchSizeList = [NlmeansFilter.DEFAULT_CHROMA_PATCH_SIZE]

        self.__researchWindowList = []
        researchWindow = kwargs.get(NlmeansFilter.RESEARCH_WINDOW_KEY, '')
        if researchWindow:
            researchWindowTokens = researchWindow.split(',')
            for rwt in researchWindowTokens:
                try:
                    researchWindowValue = int(rwt)
                except ValueError:
                    raise ValueError('NlmeansFilter: Research window value has to be of type int')
                if researchWindowValue < 0 or researchWindowValue > 99:
                    raise ValueError('NlmeansFilter: Research window value has to be between 0 and 99')
                if researchWindowValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Research window value has to be an odd number')
                self.__researchWindowList.append(researchWindowValue)
        else:
            self.__researchWindowList = [NlmeansFilter.DEFAULT_RESEARCH_WINDOW]

        self.__chromaResearchWindowList = []
        chromaResearchWindow = kwargs.get(NlmeansFilter.CHROMA_RESEARCH_WINDOW_KEY, '')
        if chromaResearchWindow:
            chromaResearchWindowTokens = chromaResearchWindow.split(',')
            for crwt in chromaResearchWindowTokens:
                try:
                    chromaResearchWindowValue = int(crwt)
                except ValueError:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be of type int')
                if chromaResearchWindowValue < 0 or chromaResearchWindowValue > 99:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be between 0 and 99')
                if chromaResearchWindowValue % 2 == 0:
                    raise ValueError('NlmeansFilter: Chroma research window value has to be an odd number')
                self.__chromaResearchWindowList.append(chromaResearchWindowValue)
        else:
            self.__chromaResearchWindowList = [NlmeansFilter.DEFAULT_CHROMA_RESEARCH_WINDOW]

        super().__init__(self)


    def getPayload(self, iteration):

        strength = iteration[0]
        patchSize = iteration[1]
        chromaPatchSize = iteration[2]
        researchWindow = iteration[3]
        chromaResearchWindow = iteration[4]

        suffices = []

        if len(self.__strengthList) > 1:
            suffices += [f"ds{strength}"]
        if len(self.__patchSizeList) > 1:
            suffices += [f"dp{patchSize}"]
        if len(self.__chromaPatchSizeList) > 1:
            suffices += [f"dpc{chromaPatchSize}"]
        if len(self.__researchWindowList) > 1:
            suffices += [f"dr{researchWindow}"]
        if len(self.__chromaResearchWindowList) > 1:
            suffices += [f"drc{chromaResearchWindow}"]

        filterName = 'nlmeans_opencl' if self.__useHardware else 'nlmeans'

        payload = {'identifier': NlmeansFilter.IDENTIFIER,
                   'parameters': {
                       'strength': strength,
                       'patch_size': patchSize,
                       'chroma_patch_size': chromaPatchSize,
                       'research_window': researchWindow,
                       'chroma_research_window': chromaResearchWindow
                   },
                   'suffices': suffices,
                   'variant': f"DS{strength}-DP{patchSize}-DPC{chromaPatchSize}"
                              + f"-DR{researchWindow}-DRC{chromaResearchWindow}",
                   'tokens': ['-vf', f"{filterName}=s={strength}"
                                     + f":p={patchSize}"
                                     + f":pc={chromaPatchSize}"
                                     + f":r={researchWindow}"
                                     + f":rc={chromaResearchWindow}"]}

        return payload


    def getYield(self):
        for it in itertools.product(self.__strengthList,
                                    self.__patchSizeList,
                                    self.__chromaPatchSizeList,
                                    self.__researchWindowList,
                                    self.__chromaResearchWindowList):
            yield self.getPayload(it)
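# The generated tokens map onto ffmpeg's nlmeans filter options (s: denoising
# strength, p/pc: luma/chroma patch size, r/rc: luma/chroma research window).
# With the defaults above the emitted argument pair is:
#
#   -vf nlmeans=s=2.8:p=13:pc=9:r=23:rc=17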
@@ -0,0 +1,54 @@
import itertools

from .filter import Filter


class PresetFilter(Filter):

    IDENTIFIER = 'preset'

    DEFAULT_PRESET = 5

    PRESET_KEY = 'preset'

    def __init__(self, **kwargs):

        self.__presetsList = []
        presets = str(kwargs.get(PresetFilter.PRESET_KEY, ''))
        if presets:
            presetTokens = presets.split(',')
            for q in presetTokens:
                try:
                    presetValue = int(q)
                except ValueError:
                    raise ValueError('PresetFilter: Preset value has to be of type int')
                if presetValue < 0 or presetValue > 13:
                    raise ValueError('PresetFilter: Preset value has to be between 0 and 13')
                self.__presetsList.append(presetValue)
        else:
            self.__presetsList = [PresetFilter.DEFAULT_PRESET]

        super().__init__(self)


    def getPayload(self, preset):

        suffices = []

        if len(self.__presetsList) > 1:
            suffices += [f"p{preset}"]

        payload = {'identifier': PresetFilter.IDENTIFIER,
                   'parameters': {
                       'preset': preset
                   },
                   'suffices': suffices,
                   'variant': f"P{preset}",
                   'tokens': []}

        return payload


    def getYield(self):
        for q in self.__presetsList:
            yield self.getPayload(q)
@@ -0,0 +1,54 @@
import itertools

from .filter import Filter


class QualityFilter(Filter):

    IDENTIFIER = 'quality'

    DEFAULT_QUALITY = 32

    QUALITY_KEY = 'quality'

    def __init__(self, **kwargs):

        self.__qualitiesList = []
        qualities = kwargs.get(QualityFilter.QUALITY_KEY, '')
        if qualities:
            qualityTokens = qualities.split(',')
            for q in qualityTokens:
                try:
                    qualityValue = int(q)
                except ValueError:
                    raise ValueError('QualityFilter: Quality value has to be of type int')
                if qualityValue < 0 or qualityValue > 63:
                    raise ValueError('QualityFilter: Quality value has to be between 0 and 63')
                self.__qualitiesList.append(qualityValue)
        else:
            self.__qualitiesList = [QualityFilter.DEFAULT_QUALITY]

        super().__init__(self)


    def getPayload(self, quality):

        suffices = []

        if len(self.__qualitiesList) > 1:
            suffices += [f"q{quality}"]

        payload = {'identifier': QualityFilter.IDENTIFIER,
                   'parameters': {
                       'quality': quality
                   },
                   'suffices': suffices,
                   'variant': f"Q{quality}",
                   'tokens': []}

        return payload


    def getYield(self):
        for q in self.__qualitiesList:
            yield self.getPayload(q)
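# Sketch of the yielded payloads (follows from the code above): with
# quality='30,34' getYield() emits two payloads suffixed 'q30' and 'q34';
# with a single or default quality the suffix list stays empty, so output
# filenames only gain a quality suffix when several variants are produced.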
@@ -0,0 +1,6 @@
from .filter import Filter


class ScaleFilter(Filter):

    def __init__(self):
        super().__init__(self)
@@ -1,11 +1,12 @@
-from textual.app import App, ComposeResult
+from textual.app import ComposeResult
 from textual.screen import Screen
-from textual.widgets import Header, Footer, Placeholder, Label
+from textual.widgets import Footer, Placeholder


 class HelpScreen(Screen):
     def __init__(self):
         super().__init__()
         context = self.app.getContext()

     def compose(self) -> ComposeResult:
         yield Placeholder("Help Screen")
         yield Footer()
@@ -0,0 +1,187 @@
import re, logging

from jinja2 import Environment, Undefined
from .constants import DEFAULT_OUTPUT_FILENAME_TEMPLATE
from .configuration_controller import ConfigurationController


class EmptyStringUndefined(Undefined):
    def __str__(self):
        return ''


DIFF_ADDED_KEY = 'added'
DIFF_REMOVED_KEY = 'removed'
DIFF_CHANGED_KEY = 'changed'
DIFF_UNCHANGED_KEY = 'unchanged'


def dictDiff(a : dict, b : dict):

    a_keys = set(a.keys())
    b_keys = set(b.keys())

    a_only = a_keys - b_keys
    b_only = b_keys - a_keys
    a_b = a_keys & b_keys

    changed = {k for k in a_b if a[k] != b[k]}

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
        diffResult[DIFF_UNCHANGED_KEY] = b_keys
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only
    if changed:
        diffResult[DIFF_CHANGED_KEY] = changed

    return diffResult

def dictCache(element: dict, cache: list = []):
    # The mutable default argument is deliberate here: it acts as a cache that
    # persists across calls when no explicit cache is passed in.
    for index in range(len(cache)):
        diff = dictDiff(cache[index], element)
        if not diff:
            return index, cache
    cache.append(element)
    return -1, cache


def setDiff(a : set, b : set) -> dict:

    a_only = a - b
    b_only = b - a

    diffResult = {}

    if a_only:
        diffResult[DIFF_REMOVED_KEY] = a_only
    if b_only:
        diffResult[DIFF_ADDED_KEY] = b_only

    return diffResult


def permutateList(inputList: list, permutation: list):

    # 0,1,2: ABC
    # 0,2,1: ACB
    # 1,2,0: BCA

    pass



def filterFilename(fileName: str) -> str:
    """This filter replaces characters from TMDB responses that are problematic
    in filenames with safer ones, or removes them"""

    fileName = str(fileName).replace('/', '-')
    fileName = str(fileName).replace(':', ';')
    fileName = str(fileName).replace('*', '')
    fileName = str(fileName).replace("'", '')
    fileName = str(fileName).replace("?", '#')

    return fileName.strip()

def substituteTmdbFilename(fileName: str) -> str:
    """If chaining this method with filterFilename, apply this one first, as the latter will destroy some patterns"""

    # This indicates filler episodes in TMDB episode names
    fileName = str(fileName).replace(' (*)', '')
    fileName = str(fileName).replace('(*)', '')

    # This indicates the index of multi-episode files
    episodePartMatch = re.search("\\(([0-9]+)\\)$", fileName)
    if episodePartMatch is not None:
        partSuffix = str(episodePartMatch.group(0))
        partIndex = episodePartMatch.groups()[0]
        fileName = str(fileName).replace(partSuffix, f"Teil {partIndex}")

    # Also multi-episodes with first and last episode index
    episodePartMatch = re.search("\\(([0-9]+)[-\\/]([0-9]+)\\)$", fileName)
    if episodePartMatch is not None:
        partSuffix = str(episodePartMatch.group(0))
        partFirstIndex = episodePartMatch.groups()[0]
        partLastIndex = episodePartMatch.groups()[1]
        fileName = str(fileName).replace(partSuffix, f"Teil {partFirstIndex}-{partLastIndex}")

    return fileName


def getEpisodeFileBasename(showName,
                           episodeName,
                           season,
                           episode,
                           indexSeasonDigits = 2,
                           indexEpisodeDigits = 2,
                           indicatorSeasonDigits = 2,
                           indicatorEpisodeDigits = 2,
                           context = None):
    """
    One Piece:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 4,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 4

    Three-Body:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 2,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 2

    Dragonball:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 3,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 3

    Boruto:
        indexSeasonDigits = 0,
        indexEpisodeDigits = 4,
        indicatorSeasonDigits = 2,
        indicatorEpisodeDigits = 4
    """

    cc: ConfigurationController = context['config'] if context is not None and 'config' in context.keys() else None
    configData = cc.getData() if cc is not None else {}
    outputFilenameTemplate = configData.get(ConfigurationController.OUTPUT_FILENAME_TEMPLATE_KEY,
                                            DEFAULT_OUTPUT_FILENAME_TEMPLATE)

    if context is not None and 'logger' in context.keys():
        logger = context['logger']
    else:
        logger = logging.getLogger('FFX')
        logger.addHandler(logging.NullHandler())

    indexSeparator = ' ' if indexSeasonDigits or indexEpisodeDigits else ''
    seasonIndex = '{num:{fill}{width}}'.format(num=season, fill='0', width=indexSeasonDigits) if indexSeasonDigits else ''
    episodeIndex = '{num:{fill}{width}}'.format(num=episode, fill='0', width=indexEpisodeDigits) if indexEpisodeDigits else ''

    indicatorSeparator = ' - ' if indicatorSeasonDigits or indicatorEpisodeDigits else ''
    seasonIndicator = 'S{num:{fill}{width}}'.format(num=season, fill='0', width=indicatorSeasonDigits) if indicatorSeasonDigits else ''
    episodeIndicator = 'E{num:{fill}{width}}'.format(num=episode, fill='0', width=indicatorEpisodeDigits) if indicatorEpisodeDigits else ''

    jinjaKwargs = {
        'ffx_show_name': showName,
        'ffx_index_separator': indexSeparator,
        'ffx_season_index': str(seasonIndex),
        'ffx_episode_index': str(episodeIndex),
        'ffx_index': str(seasonIndex) + str(episodeIndex),
        'ffx_episode_name': episodeName,
        'ffx_indicator_separator': indicatorSeparator,
        'ffx_season_indicator': str(seasonIndicator),
        'ffx_episode_indicator': str(episodeIndicator),
        'ffx_indicator': str(seasonIndicator) + str(episodeIndicator)
    }

    jinjaEnv = Environment(undefined=EmptyStringUndefined)
    jinjaTemplate = jinjaEnv.from_string(outputFilenameTemplate)
    return jinjaTemplate.render(**jinjaKwargs)

    # return ''.join(filenameTokens)
|
||||||
|
|
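A quick sketch of how these helpers chain, assuming the function whose tail appears above is filterFilename; the template in the comment is an assumption, not the project's actual DEFAULT_OUTPUT_FILENAME_TEMPLATE:

# Hypothetical chaining sketch: substituteTmdbFilename() first, filterFilename() second.
name = substituteTmdbFilename("Journey's End (2)")   # -> "Journey's End Teil 2"
name = filterFilename(name)                          # -> "Journeys End Teil 2"

base = getEpisodeFileBasename("One Piece", name, season=1, episode=37,
                              indexSeasonDigits=0, indexEpisodeDigits=4,
                              indicatorSeasonDigits=2, indicatorEpisodeDigits=4)
# Assuming a template along the lines of
#   "{{ ffx_show_name }}{{ ffx_index_separator }}{{ ffx_index }}{{ ffx_indicator_separator }}{{ ffx_indicator }} {{ ffx_episode_name }}"
# this would render roughly: "One Piece 0037 - S01E0037 Journeys End Teil 2"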
@@ -1,4 +1,4 @@
-import click, re
+import click
 
 from ffx.model.pattern import Pattern
 from ffx.media_descriptor import MediaDescriptor
@@ -0,0 +1,47 @@
import os, sys, importlib, inspect, glob, re

from ffx.configuration_controller import ConfigurationController
from ffx.database import databaseContext

from sqlalchemy import Engine
from sqlalchemy.orm import sessionmaker


class Conversion():

    def __init__(self):

        self._context = {}
        self._context['config'] = ConfigurationController()

        self._context['database'] = databaseContext(databasePath=self._context['config'].getDatabaseFilePath())

        # NOTE: single underscore (not a name-mangling double underscore), so
        # subclasses like Conversion_2_3 can actually reach these attributes
        # (self.__x inside a subclass would resolve to _Subclass__x).
        self._databaseSession: sessionmaker = self._context['database']['session']
        self._databaseEngine: Engine = self._context['database']['engine']

    @staticmethod
    def list():

        basePath = os.path.dirname(__file__)

        filenamePattern = re.compile("conversion_([0-9]+)_([0-9]+)\\.py")

        filenameList = [os.path.basename(fp) for fp in glob.glob(f"{ basePath }/*.py") if fp != __file__]

        versionTupleList = [(fm.group(1), fm.group(2)) for fn in filenameList if (fm := filenamePattern.search(fn))]

        return versionTupleList

    @staticmethod
    def getClassReference(versionFrom, versionTo):
        importlib.import_module(f"ffx.model.conversions.conversion_{ versionFrom }_{ versionTo }")
        for name, obj in inspect.getmembers(sys.modules[f"ffx.model.conversions.conversion_{ versionFrom }_{ versionTo }"]):
            #HINT: Excluding DispositionCombination as it seems to be included by import (?)
            if inspect.isclass(obj) and name != 'Conversion' and name.startswith('Conversion'):
                return obj

    @staticmethod
    def getAllClassReferences():
        return [Conversion.getClassReference(verFrom, verTo) for verFrom, verTo in Conversion.list()]
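For orientation, the discovery convention at work here, assuming migration modules named conversion_<from>_<to>.py sit next to conversion.py:

# Hypothetical directory: conversion.py, conversion_2_3.py, conversion_3_4.py
# Conversion.list()                  -> [('2', '3'), ('3', '4')]  (version parts as strings)
# Conversion.getAllClassReferences() -> [Conversion_2_3, Conversion_3_4]
#
# A caller could then run all migrations in file order, assuming each
# subclass implements applyConversion():
for ConversionClass in Conversion.getAllClassReferences():
    ConversionClass().applyConversion()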
@@ -0,0 +1,17 @@
import os, sys, importlib, inspect, glob, re

from sqlalchemy import text

from .conversion import Conversion


class Conversion_2_3(Conversion):

    def __init__(self):
        super().__init__()

    def applyConversion(self):

        e = self._databaseEngine

        with e.connect() as c:
            # SQLAlchemy 2.x executes raw SQL only when wrapped in text(),
            # and the connection must be committed for the DDL to persist.
            c.execute(text("ALTER TABLE user ADD COLUMN email VARCHAR(255)"))
            c.commit()
@@ -0,0 +1,7 @@
import os, sys, importlib, inspect, glob, re

from .conversion import Conversion


class Conversion_3_4(Conversion):

    pass
@@ -1,14 +1,14 @@
 import click
 
-from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
-from sqlalchemy.orm import relationship, sessionmaker, Mapped, backref
+from sqlalchemy import Column, Integer, String, ForeignKey
+from sqlalchemy.orm import relationship
 
 from .show import Base, Show
+from .track import Track
 
 from ffx.media_descriptor import MediaDescriptor
 from ffx.show_descriptor import ShowDescriptor
 
 
 class Pattern(Base):
 
     __tablename__ = 'patterns'
@@ -0,0 +1,16 @@
# from typing import List
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Enum
from sqlalchemy.orm import relationship, declarative_base, sessionmaker

from .show import Base


class Property(Base):

    __tablename__ = 'properties'

    # v1.x
    id = Column(Integer, primary_key=True)

    key = Column(String)
    value = Column(String)
@@ -0,0 +1,71 @@
import click

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship

from .show import Base, Show


class ShiftedSeason(Base):

    __tablename__ = 'shifted_seasons'

    # v1.x
    id = Column(Integer, primary_key=True)

    # v2.0
    # id: Mapped[int] = mapped_column(Integer, primary_key=True)
    # pattern: Mapped[str] = mapped_column(String, nullable=False)

    # v1.x
    show_id = Column(Integer, ForeignKey('shows.id', ondelete="CASCADE"))
    show = relationship(Show, back_populates='shifted_seasons', lazy='joined')

    # v2.0
    # show_id: Mapped[int] = mapped_column(ForeignKey("shows.id", ondelete="CASCADE"))
    # show: Mapped["Show"] = relationship(back_populates="patterns")

    original_season = Column(Integer)

    first_episode = Column(Integer, default = -1)
    last_episode = Column(Integer, default = -1)

    season_offset = Column(Integer, default = 0)
    episode_offset = Column(Integer, default = 0)

    def getId(self):
        return self.id

    def getOriginalSeason(self):
        return self.original_season

    def getFirstEpisode(self):
        return self.first_episode

    def getLastEpisode(self):
        return self.last_episode

    def getSeasonOffset(self):
        return self.season_offset

    def getEpisodeOffset(self):
        return self.episode_offset

    def getObj(self):

        shiftedSeasonObj = {}

        shiftedSeasonObj['original_season'] = self.getOriginalSeason()
        shiftedSeasonObj['first_episode'] = self.getFirstEpisode()
        shiftedSeasonObj['last_episode'] = self.getLastEpisode()
        shiftedSeasonObj['season_offset'] = self.getSeasonOffset()
        shiftedSeasonObj['episode_offset'] = self.getEpisodeOffset()

        return shiftedSeasonObj
@@ -0,0 +1,39 @@
import subprocess, logging
from typing import List


def executeProcess(commandSequence: List[str], directory: str = None, context: dict = None):
    """
    niceness: -20 to +19
    cpu_percent: 1 to 99
    """

    if context is None:
        logger = logging.getLogger('FFX')
        logger.addHandler(logging.NullHandler())
    else:
        logger = context['logger']

    niceSequence = []

    niceness = (int(context['resource_limits']['niceness'])
                if context is not None
                and 'resource_limits' in context.keys()
                and 'niceness' in context['resource_limits'].keys() else 99)
    cpu_percent = (int(context['resource_limits']['cpu_percent'])
                   if context is not None
                   and 'resource_limits' in context.keys()
                   and 'cpu_percent' in context['resource_limits'].keys() else 0)

    if niceness >= -20 and niceness <= 19:
        niceSequence += ['nice', '-n', str(niceness)]
    if cpu_percent >= 1:
        niceSequence += ['cpulimit', '-l', str(cpu_percent), '--']

    niceCommand = niceSequence + commandSequence

    logger.debug(f"executeProcess() command sequence: {' '.join(niceCommand)}")

    process = subprocess.Popen(niceCommand, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8', cwd = directory)
    output, error = process.communicate()

    return output, error, process.returncode
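A minimal usage sketch for executeProcess(), assuming the nice and cpulimit binaries are installed and the context carries a pre-configured logger:

import logging

context = {
    'logger': logging.getLogger('FFX'),
    'resource_limits': {'niceness': 10, 'cpu_percent': 50},
}

# Effectively runs: nice -n 10 cpulimit -l 50 -- ffmpeg -version
out, err, rc = executeProcess(['ffmpeg', '-version'], context=context)
if rc != 0:
    context['logger'].error(err)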
@@ -1,6 +1,7 @@
-from textual.app import App, ComposeResult
+from textual.app import ComposeResult
 from textual.screen import Screen
-from textual.widgets import Header, Footer, Placeholder, Label
+from textual.widgets import Footer, Placeholder
 
 
 class SettingsScreen(Screen):
     def __init__(self):
@@ -0,0 +1,223 @@
import click

from ffx.model.shifted_season import ShiftedSeason


class EpisodeOrderException(Exception):
    pass

class RangeOverlapException(Exception):
    pass


class ShiftedSeasonController():

    def __init__(self, context):

        self.context = context
        self.Session = self.context['database']['session'] # convenience

    def checkShiftedSeason(self, showId: int, shiftedSeasonObj: dict, shiftedSeasonId: int = 0):
        try:
            s = self.Session()

            firstEpisode = int(shiftedSeasonObj['first_episode'])
            lastEpisode = int(shiftedSeasonObj['last_episode'])

            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))
            if shiftedSeasonId:
                q = q.filter(ShiftedSeason.id != int(shiftedSeasonId))

            siblingShiftedSeason: ShiftedSeason
            for siblingShiftedSeason in q.all():

                siblingFirstEpisode = siblingShiftedSeason.getFirstEpisode()
                siblingLastEpisode = siblingShiftedSeason.getLastEpisode()

                # Two ranges overlap iff each one starts before the other ends.
                if (lastEpisode >= siblingFirstEpisode
                    and siblingLastEpisode >= firstEpisode):
                    return False
            return True

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.checkShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def addShiftedSeason(self, showId: int, shiftedSeasonObj: dict):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.addShiftedSeason(): Argument showId is required to be of type int")

        if type(shiftedSeasonObj) is not dict:
            raise ValueError("ShiftedSeasonController.addShiftedSeason(): Argument shiftedSeasonObj is required to be of type dict")

        try:
            s = self.Session()

            firstEpisode = int(shiftedSeasonObj['first_episode'])
            lastEpisode = int(shiftedSeasonObj['last_episode'])

            if lastEpisode < firstEpisode:
                raise EpisodeOrderException()

            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))

            shiftedSeason = ShiftedSeason(show_id = int(showId),
                                          original_season = int(shiftedSeasonObj['original_season']),
                                          first_episode = firstEpisode,
                                          last_episode = lastEpisode,
                                          season_offset = int(shiftedSeasonObj['season_offset']),
                                          episode_offset = int(shiftedSeasonObj['episode_offset']))
            s.add(shiftedSeason)
            s.commit()
            return shiftedSeason.getId()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.addShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def updateShiftedSeason(self, shiftedSeasonId: int, shiftedSeasonObj: dict):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.updateShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        if type(shiftedSeasonObj) is not dict:
            raise ValueError("ShiftedSeasonController.updateShiftedSeason(): Argument shiftedSeasonObj is required to be of type dict")

        try:
            s = self.Session()

            q = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId))

            if q.count():

                shiftedSeason = q.first()

                shiftedSeason.original_season = int(shiftedSeasonObj['original_season'])
                shiftedSeason.first_episode = int(shiftedSeasonObj['first_episode'])
                shiftedSeason.last_episode = int(shiftedSeasonObj['last_episode'])
                shiftedSeason.season_offset = int(shiftedSeasonObj['season_offset'])
                shiftedSeason.episode_offset = int(shiftedSeasonObj['episode_offset'])

                s.commit()
                return True

            else:
                return False

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.updateShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def findShiftedSeason(self, showId: int, originalSeason: int, firstEpisode: int, lastEpisode: int):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument showId is required to be of type int")

        if type(originalSeason) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument originalSeason is required to be of type int")

        if type(firstEpisode) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument firstEpisode is required to be of type int")

        if type(lastEpisode) is not int:
            raise ValueError("ShiftedSeasonController.findShiftedSeason(): Argument lastEpisode is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId),
                                              ShiftedSeason.original_season == int(originalSeason),
                                              ShiftedSeason.first_episode == int(firstEpisode),
                                              ShiftedSeason.last_episode == int(lastEpisode))

            return q.first().getId() if q.count() else None

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.findShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def getShiftedSeasonSiblings(self, showId: int):

        if type(showId) is not int:
            raise ValueError("ShiftedSeasonController.getShiftedSeasonSiblings(): Argument showId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.show_id == int(showId))

            return q.all()

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.getShiftedSeasonSiblings(): {repr(ex)}")
        finally:
            s.close()

    def getShiftedSeason(self, shiftedSeasonId: int):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.getShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId))

            return q.first() if q.count() else None

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.getShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def deleteShiftedSeason(self, shiftedSeasonId):

        if type(shiftedSeasonId) is not int:
            raise ValueError("ShiftedSeasonController.deleteShiftedSeason(): Argument shiftedSeasonId is required to be of type int")

        try:
            s = self.Session()
            q = s.query(ShiftedSeason).filter(ShiftedSeason.id == int(shiftedSeasonId))

            if q.count():

                #DAFUQ: https://stackoverflow.com/a/19245058
                # q.delete()
                shiftedSeason = q.first()
                s.delete(shiftedSeason)

                s.commit()
                return True
            return False

        except Exception as ex:
            raise click.ClickException(f"ShiftedSeasonController.deleteShiftedSeason(): {repr(ex)}")
        finally:
            s.close()

    def shiftSeason(self, showId, season, episode):

        shiftedSeasonEntry: ShiftedSeason
        for shiftedSeasonEntry in self.getShiftedSeasonSiblings(showId):

            if (season == shiftedSeasonEntry.getOriginalSeason()
                and (shiftedSeasonEntry.getFirstEpisode() == -1 or episode >= shiftedSeasonEntry.getFirstEpisode())
                and (shiftedSeasonEntry.getLastEpisode() == -1 or episode <= shiftedSeasonEntry.getLastEpisode())):

                shiftedSeason = season + shiftedSeasonEntry.getSeasonOffset()
                shiftedEpisode = episode + shiftedSeasonEntry.getEpisodeOffset()

                self.context['logger'].info(f"Shifting season: {season} episode: {episode} "
                                            +f"-> season: {shiftedSeason} episode: {shiftedEpisode}")

                return shiftedSeason, shiftedEpisode

        return season, episode
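A worked example of the shiftSeason() rule with invented values: a row with original_season=1, first_episode=21, last_episode=40, season_offset=1, episode_offset=-20 remaps absolute episode numbering onto a per-season one:

# Hypothetical data, for illustration only.
# shiftSeason(showId, season=1, episode=25)
#   episode 25 lies in [21, 40], so:
#     shiftedSeason  = 1 + 1   = 2
#     shiftedEpisode = 25 - 20 = 5        -> returns (2, 5)
#
# shiftSeason(showId, season=1, episode=12)
#   matches no configured range            -> returns (1, 12) unchanged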
@@ -0,0 +1,125 @@
import click

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button
from textual.containers import Grid

from .shifted_season_controller import ShiftedSeasonController

from ffx.model.shifted_season import ShiftedSeason


# Screen[dict[int, str, int]]
class ShiftedSeasonDeleteScreen(Screen):

    CSS = """

    Grid {
        grid-size: 2;
        grid-rows: 2 auto;
        grid-columns: 30 330;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }
    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .box {
        height: 100%;
        border: solid green;
    }
    """

    def __init__(self, showId = None, shiftedSeasonId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session'] # convenience

        self.__ssc = ShiftedSeasonController(context = self.context)

        self._showId = showId
        self.__shiftedSeasonId = shiftedSeasonId

    def on_mount(self):

        shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

        self.query_one("#static_show_id", Static).update(str(self._showId))
        self.query_one("#static_original_season", Static).update(str(shiftedSeason.getOriginalSeason()))
        self.query_one("#static_first_episode", Static).update(str(shiftedSeason.getFirstEpisode()))
        self.query_one("#static_last_episode", Static).update(str(shiftedSeason.getLastEpisode()))
        self.query_one("#static_season_offset", Static).update(str(shiftedSeason.getSeasonOffset()))
        self.query_one("#static_episode_offset", Static).update(str(shiftedSeason.getEpisodeOffset()))

    def compose(self):

        yield Header()

        with Grid():

            yield Static("Are you sure you want to delete the following shifted season?", id="toplabel", classes="two")

            yield Static(" ", classes="two")

            yield Static("from show")
            yield Static(" ", id="static_show_id")

            yield Static(" ", classes="two")

            yield Static("Original season")
            yield Static(" ", id="static_original_season")

            yield Static("First episode")
            yield Static(" ", id="static_first_episode")

            yield Static("Last episode")
            yield Static(" ", id="static_last_episode")

            yield Static("Season offset")
            yield Static(" ", id="static_season_offset")

            yield Static("Episode offset")
            yield Static(" ", id="static_episode_offset")

            yield Static(" ", classes="two")

            yield Button("Delete", id="delete_button")
            yield Button("Cancel", id="cancel_button")

        yield Footer()

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        if event.button.id == "delete_button":

            if self.__shiftedSeasonId is None:
                raise click.ClickException('ShiftedSeasonDeleteScreen.on_button_pressed(): shifted season id is undefined')

            if self.__ssc.deleteShiftedSeason(self.__shiftedSeasonId):
                self.dismiss(self.__shiftedSeasonId)

            else:
                #TODO: show a notification
                self.app.pop_screen()

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -0,0 +1,221 @@
from typing import List

from textual.screen import Screen
from textual.widgets import Header, Footer, Static, Button, Input
from textual.containers import Grid

from .shifted_season_controller import ShiftedSeasonController

from ffx.model.shifted_season import ShiftedSeason


# Screen[dict[int, str, int]]
class ShiftedSeasonDetailsScreen(Screen):

    CSS = """

    Grid {
        grid-size: 3 10;
        grid-rows: 2 2 2 2 2 2 2 2 2 2;
        grid-columns: 40 40 40;
        height: 100%;
        width: 100%;
        padding: 1;
    }

    Input {
        border: none;
    }
    Button {
        border: none;
    }

    DataTable {
        min-height: 6;
    }

    DataTable .datatable--cursor {
        background: darkorange;
        color: black;
    }

    DataTable .datatable--header {
        background: steelblue;
        color: white;
    }

    #toplabel {
        height: 1;
    }

    .two {
        column-span: 2;
    }

    .three {
        column-span: 3;
    }

    .four {
        column-span: 4;
    }
    .five {
        column-span: 5;
    }
    .six {
        column-span: 6;
    }
    .seven {
        column-span: 7;
    }

    .box {
        height: 100%;
        border: solid green;
    }

    .yellow {
        tint: yellow 40%;
    }
    """

    def __init__(self, showId = None, shiftedSeasonId = None):
        super().__init__()

        self.context = self.app.getContext()
        self.Session = self.context['database']['session'] # convenience

        self.__ssc = ShiftedSeasonController(context = self.context)

        self.__showId = showId
        self.__shiftedSeasonId = shiftedSeasonId

    def on_mount(self):

        if self.__shiftedSeasonId is not None:
            shiftedSeason: ShiftedSeason = self.__ssc.getShiftedSeason(self.__shiftedSeasonId)

            originalSeason = shiftedSeason.getOriginalSeason()
            self.query_one("#input_original_season", Input).value = str(originalSeason)

            firstEpisode = shiftedSeason.getFirstEpisode()
            self.query_one("#input_first_episode", Input).value = str(firstEpisode) if firstEpisode != -1 else ''

            lastEpisode = shiftedSeason.getLastEpisode()
            self.query_one("#input_last_episode", Input).value = str(lastEpisode) if lastEpisode != -1 else ''

            seasonOffset = shiftedSeason.getSeasonOffset()
            self.query_one("#input_season_offset", Input).value = str(seasonOffset) if seasonOffset else ''

            episodeOffset = shiftedSeason.getEpisodeOffset()
            self.query_one("#input_episode_offset", Input).value = str(episodeOffset) if episodeOffset else ''

    def compose(self):

        yield Header()

        with Grid():

            # 1
            yield Static("Edit shifted season" if self.__shiftedSeasonId is not None else "New shifted season", id="toplabel", classes="three")

            # 2
            yield Static(" ", classes="three")

            # 3
            yield Static("Original season")
            yield Input(id="input_original_season", classes="two")

            # 4
            yield Static("First Episode")
            yield Input(id="input_first_episode", classes="two")

            # 5
            yield Static("Last Episode")
            yield Input(id="input_last_episode", classes="two")

            # 6
            yield Static("Season offset")
            yield Input(id="input_season_offset", classes="two")

            # 7
            yield Static("Episode offset")
            yield Input(id="input_episode_offset", classes="two")

            # 8
            yield Static(" ", classes="three")

            # 9
            yield Button("Save", id="save_button")
            yield Button("Cancel", id="cancel_button")
            yield Static(" ")

            # 10
            yield Static(" ", classes="three")

        yield Footer()

    def getShiftedSeasonObjFromInput(self):

        shiftedSeasonObj = {}

        originalSeason = self.query_one("#input_original_season", Input).value
        if not originalSeason:
            return None
        shiftedSeasonObj['original_season'] = int(originalSeason)

        try:
            shiftedSeasonObj['first_episode'] = int(self.query_one("#input_first_episode", Input).value)
        except ValueError:
            shiftedSeasonObj['first_episode'] = -1

        try:
            shiftedSeasonObj['last_episode'] = int(self.query_one("#input_last_episode", Input).value)
        except ValueError:
            shiftedSeasonObj['last_episode'] = -1

        try:
            shiftedSeasonObj['season_offset'] = int(self.query_one("#input_season_offset", Input).value)
        except ValueError:
            shiftedSeasonObj['season_offset'] = 0

        try:
            shiftedSeasonObj['episode_offset'] = int(self.query_one("#input_episode_offset", Input).value)
        except ValueError:
            shiftedSeasonObj['episode_offset'] = 0

        return shiftedSeasonObj

    # Event handler for button press
    def on_button_pressed(self, event: Button.Pressed) -> None:

        # Check if the button pressed is the one we are interested in
        if event.button.id == "save_button":

            shiftedSeasonObj = self.getShiftedSeasonObjFromInput()

            if shiftedSeasonObj is not None:

                if self.__shiftedSeasonId is not None:

                    if self.__ssc.checkShiftedSeason(self.__showId, shiftedSeasonObj,
                                                     shiftedSeasonId = self.__shiftedSeasonId):
                        if self.__ssc.updateShiftedSeason(self.__shiftedSeasonId, shiftedSeasonObj):
                            self.dismiss((self.__shiftedSeasonId, shiftedSeasonObj))
                        else:
                            #TODO: show a notification
                            self.app.pop_screen()

                else:
                    if self.__ssc.checkShiftedSeason(self.__showId, shiftedSeasonObj):
                        self.__shiftedSeasonId = self.__ssc.addShiftedSeason(self.__showId, shiftedSeasonObj)
                        self.dismiss((self.__shiftedSeasonId, shiftedSeasonObj))

        if event.button.id == "cancel_button":
            self.app.pop_screen()
@@ -0,0 +1,79 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_2 import DispositionCombinator2


class DispositionCombinator21(DispositionCombinator2):

    VARIANT = 'D10'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator21.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
        else:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()

        #NOTE: The current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now.
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)  # COMMENT
            subtrack1.add(TrackDisposition.FORCED)     # stands in for DESCRIPTIONS (see NOTE above)

        return (subtrack0,
                subtrack1)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"

                # source subIndex 1
                assert not (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                            ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'forced' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
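The VARIANT codes of these disposition combinators read as a 'D' prefix plus a bitmask over the subtitle subtracks; this is an inference from the payloads in this diff, not documented behavior:

# Inferred from the payloads (not documented elsewhere):
#   'D10'  -> 2 subtracks, DEFAULT expected on subtrack 0
#   'D100' -> 3 subtracks, DEFAULT expected on subtrack 0
#   'D010' -> 3 subtracks, DEFAULT expected on subtrack 1
#   'D001' -> 3 subtracks, DEFAULT expected on subtrack 2
# With createPresets=True the input payload places DEFAULT elsewhere; the
# assert funcs still check the final state against the mask above.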
@@ -0,0 +1,97 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator31(DispositionCombinator3):

    VARIANT = 'D100'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator31.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set()
            subtrack2 = set([TrackDisposition.DEFAULT])
        else:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
            subtrack2 = set()

        #NOTE: The current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now.
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)           # COMMENT
            subtrack1.add(TrackDisposition.FORCED)               # stands in for DESCRIPTIONS (see NOTE above)
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):

                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")

                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'forced' disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'hearing_impaired' disposition"

        else:

            def f(assertObj: dict):

                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")

                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not set default disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -0,0 +1,89 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator32(DispositionCombinator3):

    VARIANT = 'D010'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator32.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set([TrackDisposition.DEFAULT])
            subtrack1 = set()
            subtrack2 = set()
        else:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
            subtrack2 = set()

        #NOTE: The current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now.
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)       # COMMENT
            # subtrack1.add(TrackDisposition.DESCRIPTIONS)  # DESCRIPTIONS
            subtrack2.add(TrackDisposition.FORCED)           # stands in for HEARING_IMPAIRED (see NOTE above)

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"

                # source subIndex 1
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DESCRIPTIONS)
                #         ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'descriptions' disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'forced' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"

                # source subIndex 1
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not set default disposition"

                # source subIndex 2
                assert (not trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -0,0 +1,89 @@
import os, sys, importlib, glob, inspect

from ffx.track_disposition import TrackDisposition
from .disposition_combinator_3 import DispositionCombinator3


class DispositionCombinator33(DispositionCombinator3):

    VARIANT = 'D001'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return DispositionCombinator33.VARIANT

    def getPayload(self):

        if self.__createPresets:
            subtrack0 = set()
            subtrack1 = set([TrackDisposition.DEFAULT])
            subtrack2 = set()
        else:
            subtrack0 = set()
            subtrack1 = set()
            subtrack2 = set([TrackDisposition.DEFAULT])

        #NOTE: The current ffmpeg version will not set most of the dispositions on arbitrary tracks,
        # so some checks for preserved dispositions are omitted for now.
        if self.__createPresets:
            # subtrack0.add(TrackDisposition.COMMENT)           # COMMENT
            subtrack1.add(TrackDisposition.FORCED)               # stands in for DESCRIPTIONS (see NOTE above)
            # subtrack2.add(TrackDisposition.HEARING_IMPAIRED)  # HEARING_IMPAIRED

        return (subtrack0,
                subtrack1,
                subtrack2)

    def createAssertFunc(self):

        if self.__createPresets:

            def f(assertObj: dict):
                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"
                # assert (trackDescriptors[0].getDispositionFlag(TrackDisposition.COMMENT)
                #         ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has not preserved set 'comment' disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"
                assert (trackDescriptors[1].getDispositionFlag(TrackDisposition.FORCED)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has not preserved set 'forced' disposition"

                # source subIndex 2
                assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"
                # assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.HEARING_IMPAIRED)
                #         ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not preserved set 'hearing_impaired' disposition"

        else:

            def f(assertObj: dict):
                if 'tracks' not in assertObj.keys():
                    raise KeyError("assertObj does not contain key 'tracks'")
                trackDescriptors = assertObj['tracks']

                # source subIndex 0
                assert (not trackDescriptors[0].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #0 index={trackDescriptors[0].getIndex()} [{trackDescriptors[0].getType().label()}:{trackDescriptors[0].getSubIndex()}] has set default disposition"

                # source subIndex 1
                assert (not trackDescriptors[1].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #1 index={trackDescriptors[1].getIndex()} [{trackDescriptors[1].getType().label()}:{trackDescriptors[1].getSubIndex()}] has set default disposition"

                # source subIndex 2
                assert (trackDescriptors[2].getDispositionFlag(TrackDisposition.DEFAULT)
                        ), f"Stream #2 index={trackDescriptors[2].getIndex()} [{trackDescriptors[2].getType().label()}:{trackDescriptors[2].getSubIndex()}] has not set default disposition"

        return f

    def shouldFail(self):
        return False
@@ -0,0 +1,156 @@
import os, sys, importlib, glob, inspect, itertools, click

from ffx.track_type import TrackType

from ffx.track_descriptor import TrackDescriptor
from ffx.media_descriptor import MediaDescriptor

from .media_combinator import MediaCombinator

from .disposition_combinator_2 import DispositionCombinator2
from .track_tag_combinator_2 import TrackTagCombinator2
from .permutation_combinator_2 import PermutationCombinator2
from .media_tag_combinator import MediaTagCombinator


class MediaCombinator2(MediaCombinator):

    VARIANT = 'VASS'

    def __init__(self, context = None,
                 createPresets: bool = False):
        super().__init__(context)

        self.__createPresets = createPresets

    def getVariant(self):
        return MediaCombinator2.VARIANT

    def getPayload(self,
                   subtitleDispositionTuple = (set(), set()),
                   subtitleTagTuple = ({}, {})):

        # Track 0: video
        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 0
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 0
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.VIDEO
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 0
        trackDescriptor0 = TrackDescriptor(**kwargs)

        # Track 1: audio
        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 1
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 1
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.AUDIO
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 0
        trackDescriptor1 = TrackDescriptor(**kwargs)

        # Track 2: first subtitle, carrying the first disposition/tag combination
        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 2
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 2
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.SUBTITLE
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 0
        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = subtitleDispositionTuple[0]
        kwargs[TrackDescriptor.TAGS_KEY] = subtitleTagTuple[0]
        trackDescriptor2 = TrackDescriptor(**kwargs)

        # Track 3: second subtitle, carrying the second disposition/tag combination
        kwargs = {}
        kwargs[TrackDescriptor.CONTEXT_KEY] = self._context
        kwargs[TrackDescriptor.INDEX_KEY] = 3
        kwargs[TrackDescriptor.SOURCE_INDEX_KEY] = 3
        kwargs[TrackDescriptor.TRACK_TYPE_KEY] = TrackType.SUBTITLE
        kwargs[TrackDescriptor.SUB_INDEX_KEY] = 1
        kwargs[TrackDescriptor.DISPOSITION_SET_KEY] = subtitleDispositionTuple[1]
        kwargs[TrackDescriptor.TAGS_KEY] = subtitleTagTuple[1]
        trackDescriptor3 = TrackDescriptor(**kwargs)

        kwargs = {}
        kwargs[MediaDescriptor.CONTEXT_KEY] = self._context
        kwargs[MediaDescriptor.TRACK_DESCRIPTOR_LIST_KEY] = [trackDescriptor0,
                                                             trackDescriptor1,
                                                             trackDescriptor2,
                                                             trackDescriptor3]

        mediaDescriptor = MediaDescriptor(**kwargs)
        # mediaDescriptor.reindexSubIndices()

        return mediaDescriptor

    def assertFunc(self, testObj = {}):
        pass

    def shouldFail(self):
        return False

    def getYield(self):

        for MTC in MediaTagCombinator.getAllClassReferences():
            for DC2 in DispositionCombinator2.getAllClassReferences():
                for TC2 in TrackTagCombinator2.getAllClassReferences():

                    dc2 = DC2(self._context)
                    tc2 = TC2(self._context)

                    mtc = MTC(self._context)

                    yObj = {}

                    yObj['identifier'] = self.getIdentifier()
                    yObj['variants'] = [self.getVariant(),
                                        f"S:{dc2.getVariant()}",
                                        f"S:{tc2.getVariant()}",
                                        mtc.getVariant()]

                    yObj['payload'] = self.getPayload(dc2.getPayload(),
                                                      tc2.getPayload())

                    yObj['assertSelectors'] = ['M', 'SD', 'ST', 'MT']
                    yObj['assertFuncs'] = [self.assertFunc,
                                           dc2.createAssertFunc(),
                                           tc2.createAssertFunc(),
                                           mtc.createAssertFunc()]

                    yObj['shouldFail'] = (self.shouldFail()
                                          or dc2.shouldFail()
                                          or tc2.shouldFail()
                                          or mtc.shouldFail())

                    yieldObj = {'target': yObj}

                    if self.__createPresets:

                        dc2_p = DC2(self._context, createPresets = True)
                        tc2_p = TC2(self._context, createPresets = True)

                        mtc_p = MTC(self._context, createPresets = True)

                        yObj_p = {}

                        yObj_p['identifier'] = self.getIdentifier()
                        yObj_p['variants'] = [self.getVariant(),
                                              f"S:{dc2_p.getVariant()}",
                                              f"S:{tc2_p.getVariant()}",
                                              mtc_p.getVariant()]

                        yObj_p['payload'] = self.getPayload(dc2_p.getPayload(),
                                                            tc2_p.getPayload())

                        yObj_p['assertSelectors'] = ['M', 'SD', 'ST', 'MT']
                        yObj_p['assertFuncs'] = [self.assertFunc,
                                                 dc2_p.createAssertFunc(),
                                                 tc2_p.createAssertFunc(),
                                                 mtc_p.createAssertFunc()]

                        yObj_p['shouldFail'] = (self.shouldFail()
                                                or dc2_p.shouldFail()
                                                or tc2_p.shouldFail()
                                                or mtc_p.shouldFail())

                        yieldObj['preset'] = yObj_p

                    yield yieldObj
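A sketch of how a harness might consume getYield(); the actual test runner is not part of this diff, so the driver below is illustrative only:

# `context` is assumed to be a prepared context dict as used elsewhere in ffx.
combinator = MediaCombinator2(context, createPresets=True)

for yieldObj in combinator.getYield():
    target = yieldObj['target']
    print(target['identifier'], ' '.join(target['variants']))
    # target['payload'] is a MediaDescriptor; a harness would transcode it,
    # probe the output into an assertObj ({'tracks': [...]}) and run each
    # function in target['assertFuncs'], comparing failures to target['shouldFail'].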